Saving Science

Science, pride of modernity, our one source of objective knowledge, is in deep trouble. Stoked by fifty years of growing public investments, scientists are more productive than ever, pouring out millions of articles in thousands of journals covering an ever-expanding array of fields and phenomena. But much of this supposed knowledge is turning out to be contestable, unreliable, unusable, or flat-out wrong. From metastatic cancer to climate change to growth economics to dietary standards, science that is supposed to yield clarity and solutions is in many instances leading instead to contradiction, controversy, and confusion. Along the way it is also undermining the four-hundred-year-old idea that wise human action can be built on a foundation of independently verifiable truths. Science is trapped in a self-destructive vortex; to escape, it will have to abdicate its protected political status and embrace both its limits and its accountability to the rest of society.

The story of how things got to this state is difficult to unravel, in no small part because the scientific enterprise is so well-defended by walls of hype, myth, and denial. But much of the problem can be traced back to a bald-faced but beautiful lie upon which rests the political and cultural power of science. This lie received its most compelling articulation just as America was about to embark on an extended period of extraordinary scientific, technological, and economic growth. It goes like this:

Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.


“The free play of free intellects…dictated by their curiosity”

So deeply embedded in our cultural psyche that it seems like an echo of common sense, this powerful vision of science comes from Vannevar Bush, the M.I.T. engineer who had been the architect of the nation’s World War II research enterprise, which delivered the atomic bomb and helped to advance microwave radar, mass production of antibiotics, and other technologies crucial to the Allied victory. He became justly famous in the process. Featured on the cover of Time magazine, he was dubbed the “General of Physics.” As the war drew to a close, Bush envisioned transitioning American science to a new era of peace, where top academic scientists would continue to receive the robust government funding they had grown accustomed to since Pearl Harbor but would no longer be shackled to the narrow dictates of military need and application, not to mention discipline and secrecy. Instead, as he put it in his July 1945 report Science, The Endless Frontier, by pursuing “research in the purest realms of science” scientists would build the foundation for “new products and new processes” to deliver health, full employment, and military security to the nation.

From this perspective, the lie as Bush told it was perhaps less a conscious effort to deceive than a seductive manipulation, for political aims, of widely held beliefs about the purity of science. Indeed, Bush’s efforts to establish the conditions for generous and long-term investments in science were extraordinarily successful, with U.S. federal funding for “basic research” rising from $265 million in 1953 to $38 billion in 2012, a twentyfold increase when adjusted for inflation. More impressive still was the increase for basic research at universities and colleges, which rose from $82 million to $24 billion, a more than fortyfold increase when adjusted for inflation. By contrast, government spending on more “applied research” at universities was much less generous, rising to just under $10 billion. The power of the lie was palpable: “the free play of free intellects” would provide the knowledge that the nation needed to confront the challenges of the future.

To go along with all that money, the beautiful lie provided a politically brilliant rationale for public spending with little public accountability. Politicians delivered taxpayer funding to scientists, but only scientists could evaluate the research they were doing. Outside efforts to guide the course of science would only interfere with its free and unpredictable advance.

Vannevar Bush (Hank Walker / The LIFE Picture Collection / Getty Images)

The fruits of curiosity-driven scientific exploration into the unknown have often been magnificent. The recent discovery of gravitational waves — an experimental confirmation of Einstein’s theoretical work from a century earlier — provided a high-publicity culmination of billions of dollars of public spending and decades of research by large teams of scientists. Multi-billion dollar investments in space exploration have yielded similarly startling knowledge about our solar system, such as the recent evidence of flowing water on Mars. And, speaking of startling, anthropologists and geneticists have used genome-sequencing technologies to offer evidence that early humans interbred with two other hominin species, Neanderthals and Denisovans. Such discoveries heighten our sense of wonder about the universe and about ourselves.

And somehow, it would seem, even as scientific curiosity stokes ever-deepening insight about the fundamental workings of our world, science managed simultaneously to deliver a cornucopia of miracles on the practical side of the equation, just as Bush predicted: digital computers, jet aircraft, cell phones, the Internet, lasers, satellites, GPS, digital imagery, nuclear and solar power. When Bush wrote his report, nothing made by humans was orbiting the earth; software didn’t exist; smallpox still did.

So one might be forgiven for believing that this amazing effusion of technological change truly was the product of “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” But one would be mostly wrong.

Science has been important for technological development, of course. Scientists have discovered and probed phenomena that turned out to have enormously broad technological applications. But the miracles of modernity in the above list came not from “the free play of free intellects,” but from the leashing of scientific creativity to the technological needs of the U.S. Department of Defense (DOD).

The story of how DOD mobilized science to help create our world exposes the lie for what it is and provides three difficult lessons that have to be learned if science is to evade the calamity it now faces.

First, scientific knowledge advances most rapidly, and is of most value to society, not when its course is determined by the “free play of free intellects” but when it is steered to solve problems — especially those related to technological innovation.

Second, when science is not steered to solve such problems, it tends to go off half-cocked in ways that can be highly detrimental to science itself.

Third — and this is the hardest and scariest lesson — science will be made more reliable and more valuable for society today not by being protected from societal influences but instead by being brought, carefully and appropriately, into a direct, open, and intimate relationship with those influences.

How DOD Gave Science Its Mojo

Almost immediately after World War II, the Department of War — soon renamed the Department of Defense — began to harness together the complete set of players necessary to ensure the United States would have all the technologies needed to win the Cold War. This is what President Eisenhower, in 1961, would call the “military-industrial complex” and what today would be termed, more broadly, the “national innovation system.” It includes everything from university laboratories and scientists, to the small and large companies that develop and commercialize innovations, to the users of those innovations — in this case, the military itself. DOD was able to catalyze rapid innovation because money was, more or less, no object; the mission — ensuring that America’s military technologies were better than anyone else’s — was all that mattered.

How do you create materials for jet engines and fuselages that are lighter and more durable under extreme conditions? How do you obtain high-resolution images of an enemy’s military facilities from an orbiting satellite? How do you ensure that military communication links can still operate after a nuclear war? These are the types of questions that the military needed to have answered, questions that demanded advances in fundamental knowledge as well as technological know-how. DOD’s needs provided not just investments in but also a powerful focus for advances in basic research in fields ranging from high-energy physics to materials science to fluid dynamics to molecular biology.

At the same time, protected from both the logic of the marketplace and the capriciousness of politics by the imperative of national defense, DOD was a demanding customer for some of the most advanced technological products that high-tech corporations could produce. For example, the first digital computer — built in the mid-1940s to calculate the trajectories of artillery shells and used to design the first hydrogen bomb — cost about $500,000 (around $4.7 million today), operated billions of times more slowly than modern computers, took up the space of a small bus, and had no immediate commercial application. Who but the Pentagon would buy such a crazy thing? But DOD also supported the science needed to keep innovation going. In the late 1950s and well into the 1960s, as the role for computers in military affairs was growing but the science wasn’t keeping up, DOD’s Advanced Research Projects Agency essentially created computer science as an academic discipline by funding work at M.I.T., Carnegie Mellon, Stanford, and other institutions.

Another example: The earliest jet engines, back in the 1940s, needed to be overhauled about every hundred hours and were forty-five times less fuel-efficient than piston engines. Why waste public money on such a technology? Because military planners knew that jet power promised combat performance greatly superior to planes powered by piston engines. For decades the Air Force and Navy funded research and development in the aircraft industry to continually drive improvement of jet engines. Meanwhile, the Boeing Company could take the jet-engine-powered aerial fuel tanker it was developing for the Air Force and use a similar design for its 707 passenger jet, the first truly safe and reliable commercial jet aircraft.

And another: AT&T’s Bell Labs, where the transistor effect was discovered, could use the demands (and investments) of the Army Signal Corps for smaller and more reliable battlefield communication technologies to improve scientific understanding of semiconducting materials as well as the reliability and performance of transistors. It was military purchases that kept the new transistor, semiconductor, and integrated-circuit industries afloat in the early and mid-1950s. As historian Thomas Misa explained in his study of DOD’s role in stimulating the development of transistors: “By subsidizing engineering development and the construction of manufacturing facilities … the military catalyzed the establishment of an industrial base” — helping to create the technological and industrial backbone for the information age. And new weapons such as missile systems and ever-more powerful nuclear warheads continued to drive the development of and demand for increasingly sophisticated and reliable electronic components such as microprocessors and supercomputers.

Today, DOD continues to push rapid innovation in select areas, including robotics (especially for drone warfare) and human enhancement (for example, to improve the battlefield performance of soldiers). But through a combination of factors — including excessive bureaucratic growth, interference from Congress, and long-term commitments to hugely expensive and troubled weapons systems with little civilian spillover potential, such as missile defense and the F-35 joint strike fighter — the Pentagon’s creativity and productivity as an innovator have significantly dissipated.

Yet the scientific and technological foundations that DOD helped to create during the Cold War continue to support the American economy. To take just one example, of the thirteen areas of technological advance that were essential to the development of the iPhone, eleven — including the microprocessor, GPS, and the Internet — can be traced back to vital military investments in research and technological development.

Americans lionize the scientist as head-in-the-clouds genius (the Einstein hero) and the inventor as misfit-in-the-garage genius (the Steve Jobs or Bill Gates hero). The discomfiting reality, however, is that much of today’s technological world exists because of DOD’s role in catalyzing and steering science and technology. This was industrial policy, and it worked because it brought all of the players in the innovation game together, disciplined them by providing strategic, long-term focus for their activities, and shielded them from the market rationality that would have doomed almost every crazy, over-expensive idea that today makes the world go round. The great accomplishments of the military-industrial complex resulted not from allowing scientists to pursue “subjects of their own choice, in the manner dictated by their curiosity,” but from channeling that curiosity toward the solution of problems that DOD wanted to solve.

Such goal-driven industrial policies are supposed to be the stuff of Soviet five-year plans, not market-based democracies, and neither scientists nor policymakers have had much of an appetite for recognizing DOD’s role in creating the foundations of our modern economy and society. Vannevar Bush’s beautiful lie has been a much more appealing explanation, ideologically and politically. Not everyone, however, has been fooled.

War on Cancer

Fran Visco was diagnosed with breast cancer in 1987. A Philadelphia trial lawyer intimidated by no one, she chose to be treated with a less toxic chemotherapy than the one her doctor recommended. She also started volunteering for a local breast cancer patient-support group, which soon led to an invitation to the organizing meeting of what came to be called the National Breast Cancer Coalition. NBCC was conceived as a political advocacy organization that would provide a unified voice for local patient groups across the nation — an approach that appealed to Visco’s activist nature. She became the organization’s first president, and has ever since been a national leader in mobilizing science, medicine, policy, and politics around the goal of eliminating the disease.

Visco was a child of the lie. “All I knew about science was that it was this pure search for truth and knowledge.” So, logically enough, she and the other activists at NBCC started out by trying to get more money for breast cancer research at the country’s most exalted research organization, the National Institutes of Health’s National Cancer Institute. But Visco was also a child of the Sixties with a penchant for questioning authority, and she wanted to play an active role in figuring out how much money was needed for research and how best to spend it. She and her NBCC colleagues identified a community of cancer researchers that they thought were particularly innovative, and brought everyone together in February 1992 to discuss what was needed to find cures more quickly and how much it would cost. Together, the advocates and scientists determined that $300 million of new money could be absorbed and well spent by the scientific community — a goal that found strong support in Congress. Meanwhile, Visco and other patient-advocates began to immerse themselves deeply in the science so they could “have a seat at the table and figure out how those dollars should be spent.”

Through an accident of congressional budgeting, it turned out that the only way to meet the $300 million goal was to have most of the money allocated to the Department of Defense. So in November 1992, Congress appropriated $210 million for a peer-reviewed breast cancer research program to be administered by the Army. The initial plan was to have most of the money transferred to the National Cancer Institute, but when Visco and her NBCC colleagues met with NCI officials to discuss how best to spend the new dollars, Director Sam Broder explained how difficult it was to influence the momentum of science because priorities were established by the bottom-up interests of the research community itself. This, Visco said, “gave us absolutely no comfort that he was going to do anything differently.”

When Visco went to DOD, “it was a completely different meeting.” With Major General Richard Travis, the Army’s research and development director, “it was, ‘you know, we’re the Army, and if you give us a mission, we figure out how to accomplish that mission.’” It was, “‘Ladies, I’m going to lead you into battle and we’re going to win the war.’”

“At some point, you really have to save a life”: Fran Visco speaks at an NBCC event at the U.S. Capitol in 2007 (Nancy Ostertag / Getty Images)

Although Visco was at first “terrified” to find herself working with the military, she also found it refreshing and empowering — a “fantastic collaboration and partnership.” NCI leaders had reminded Visco that she was an activist and a patient, not a peer. But Gen. Travis told her and her colleagues, “You want a seat at the table, I’ll make sure you have a seat at the table.” The Army welcomed the participation of patient-activists in the planning process for the breast cancer program, directly involved them in the final selection of scientific projects to be funded, and eventually even brought them into the processes for reviewing the merits of various research proposals.

DOD’s can-do approach, its enthusiasm about partnering with patient-advocates, and its dedication to solving the problem of breast cancer — rather than simply advancing our scientific understanding of the disease — won Visco over. And it didn’t take long for benefits to appear. During its first round of grantmaking in 1993–94, the program funded research on a new, biologically based targeted breast cancer therapy — a project that had already been turned down multiple times by NIH’s peer-review system because the conventional wisdom was that targeted therapies wouldn’t work. The DOD-funded studies led directly to the development of the drug Herceptin, one of the most important advances in breast cancer treatment in recent decades.

According to Dennis Slamon, the lead scientist on that project, the openness of the DOD program to funding projects like his that went against mainstream scientific beliefs was due to the patient-activists. “Absolutely, unequivocally, no question. The scientific community, perhaps even myself included, were skeptical that it was going to be doable — that a bunch of laypeople, who weren’t trained in-depth in science, were going to sit at the table and really be involved in the peer-review process in a meaningful way. And we could not have been more wrong.”

There have been few major advances in breast cancer treatment since then, but one of the most promising — a targeted therapy called palbociclib — was funded by the same DOD program and was approved by the FDA in 2015 after successful clinical trials. Despite the objections of scientists advising the program, patient-advocates also pushed DOD to ramp up funding for immunological approaches to curing breast cancer, including support for vaccine research too unconventional to be supported by either NCI or the pharmaceutical industry.

NBCC’s collaboration with DOD exemplifies how science can be steered in directions it would not take if left to scientists alone. But that turned out not to be enough. Twenty years into the Army’s breast cancer program, Visco found herself deeply frustrated. The Army was providing grants for innovative, high-risk proposals that might not have been funded by NCI. But that’s where the program’s influence ended. What Visco and Gen. Travis had failed to appreciate was that, when it came to breast cancer, the program lacked the key ingredient that made DOD such a successful innovator in other fields: the money and control needed to coordinate all the players in the innovation system and hold them accountable for working toward a common goal. And so, as NBCC and other groups brought more and more money into the research system through their effective lobbying campaigns, it grew clear to Visco that the main beneficiaries were individual scientists attracted by the new funding — not breast cancer patients. To be sure, the DOD support for innovative research is “better than what’s happening at NCI and NIH, but it’s not better enough … it’s innovation within the existing system.”

Ultimately, “all the money that was thrown at breast cancer created more problems than success,” Visco says. What seemed to drive many of the scientists was the desire to “get above the fold on the front page of the New York Times,” not to figure out how to end breast cancer. It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says. Visco has a scientist’s understanding of the extraordinary complexity of breast cancer and the difficulties of making progress toward a cure. But when it got to the point where NBCC had helped bring $2 billion to the DOD program, she started asking: “And what? And what is there to show? You want to do this science and what?”

“At some point,” Visco says, “you really have to save a life.”

The Measure of Progress

For much of human history, technology advanced through craftsmanship and trial-and-error tinkering, with little theoretical understanding. The systematic study of nature — what we today call science — was a distinct domain, making little or no contribution to technological development. Yet technology has contributed in obvious ways to scientific advance for centuries, as practical tools such as lenses, compasses, and clocks allowed scientists to study nature with ever greater accuracy and resolution. The relationship only started to swing both ways, with science contributing to technological advancement as well as benefiting from it, in the nineteenth century as, for example, organic chemistry both emerged from and found application in the German dye-making industry.

And as the Industrial Revolution came to link technological innovation to historically unprecedented economic growth, scientists began to make many important contributions to fundamental knowledge by studying phenomena whose existence was brought to light only because of the new technologies of an industrializing world. Efforts to improve the performance of steam engines, wine manufacturing, steel-making, and telephone communication — to name just a few — guided much scientific inquiry, and in some cases led to entirely new fields of basic research, such as thermodynamics, bacteriology, and radio astronomy. New technologies also provided discipline and focus for areas of fundamental science that had been progressing slowly, as vaccines did for immunology and airplanes did for theoretical aerodynamics.

Science has been such a wildly successful endeavor over the past two hundred years in large part because technology blazed a path for it to follow. Not only have new technologies created new worlds, new phenomena, and new questions for science to explore, but technological performance has provided a continuous, unambiguous demonstration of the validity of the science being done. The electronics industry and semiconductor physics progressed hand-in-hand not because scientists, working “in the manner dictated by their curiosity for exploration of the unknown,” kept lobbing new discoveries over the lab walls that then allowed transistor technology to advance, but because the quest to improve technological performance constantly raised new scientific questions and demanded advances in our understanding of the behavior of electrons in different types of materials.

Or, again, consider how the rapid development of computers beginning in the 1950s, catalyzed by DOD, led to the demand for new types of theories and knowledge about how to acquire, store, and process digital information — a new science for a new technology. Thirty years later, computer scientists were probing phenomena in a rapidly developing technological realm that had never existed before — cyberspace and the World Wide Web — and were asking questions that could never have been imagined, let alone answered, before. The National Science Foundation funded basic research into this new, technology-created realm, including grants to two graduate students in computer science at Stanford University who wanted to understand how best to navigate the novel and expanding landscape of digital information. They published their results in the 1998 article “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” The abstract begins: “In this paper, we present Google …” — the web-search protocol that led to the corporate empire whose technologies are today woven into the fabric of daily life, and whose economic and social influence is every bit as powerful as the great railroad, steel, automobile, and telecommunications corporations of previous technological revolutions. Technology led; science followed.
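As a rough illustration of what that paper describes (not the authors' actual code), here is a minimal sketch of its link-analysis idea, PageRank, run on a hypothetical four-page web. The toy graph and the iteration count are invented for illustration; the damping value of 0.85 is the one the paper suggests.

```python
# Minimal sketch of the PageRank idea from Brin and Page (1998): a page matters
# if pages that matter link to it. The link graph below is purely hypothetical.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                  # start with uniform rank
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:                     # each page passes rank along its links
                new_rank[target] += share
        rank = new_rank
    return rank

print(pagerank(links))   # "C" ends up with the highest rank: everything points to it
```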

If, as Visco says, “at some point you really have to save a life,” it will be a technology, perhaps a vaccine or drug, that does the job. Technology is what links science to human experience; it is what makes science real for us. A light switch, a jet aircraft, and a measles vaccine are cause-and-effect machines that turn phenomena that can be described by science — the flow of electrons, the movement of air molecules, the stimulation of antibodies — into reliable outcomes: the light goes on, the jet flies, the child becomes immune. The scientific phenomena must be real or the technologies would not work.

Vannevar Bush’s beautiful lie makes it easy to believe that scientific imagination gives birth to technological progress, when in reality technology sets the agenda for science, guiding it in its most productive directions and providing continual tests of its validity, progress, and value. Absent their real-world validation through technology, scientific truths would be mere abstractions. Here is where the lie exercises its most corrupting power: If we think that scientific progress is best pursued by “the free play of free intellects,” we give science a free pass to define progress without reference to the world beyond it. But if there is nothing by which to measure scientific progress outside of science itself, how can we know when our knowledge is advancing, standing still, or moving backwards?

It turns out that we cannot.

Einstein, We Have a Problem

The science world has been buffeted for nearly a decade by growing revelations that major bodies of scientific knowledge, published in peer-reviewed papers, may simply be wrong. Among recent instances: a cancer cell line used as the basis for over a thousand published breast cancer research studies was revealed to be actually a skin cancer cell line; a biotechnology company was able to replicate only six out of fifty-three “landmark” published studies it sought to validate; a test of more than one hundred potential drugs for treating amyotrophic lateral sclerosis in mice was unable to reproduce any of the positive findings that had been reported from previous studies; a compilation of nearly one hundred fifty clinical trials for therapies to block human inflammatory response showed that even though the therapies had supposedly been validated using mouse model experiments, every one of the trials failed in humans; a statistical assessment of the use of functional magnetic resonance imaging (fMRI) to map human brain function indicated that up to 70 percent of the positive findings reported in approximately 40,000 published fMRI studies could be false; and an article assessing the overall quality of basic and preclinical biomedical research estimated that between 75 and 90 percent of all studies are not reproducible. Meanwhile, a painstaking effort to assess the quality of one hundred peer-reviewed psychology experiments was able to replicate only 39 percent of the original papers’ results; annual mammograms, once the frontline of the war on breast cancer, have been shown to confer little benefit for women in their forties; and, of course, we’ve all been relieved to learn after all these years that saturated fat actually isn’t that bad for us. The number of retracted scientific publications rose tenfold during the first decade of this century, and although that number still remains in the mere hundreds, the growing number of studies such as those mentioned above suggests that poor quality, unreliable, useless, or invalid science may in fact be the norm in some fields, and the number of scientifically suspect or worthless publications may well be counted in the hundreds of thousands annually. While most of the evidence of poor scientific quality is coming from fields related to health, biomedicine, and psychology, the problems are likely to be as bad or worse in many other research areas. For example, a survey of statistical practices in economics research concluded that “the credibility of the economics literature is likely to be modest or even low.”


What is to be made of this ever-expanding litany of dispiriting revelations and reversals? Well, one could celebrate. “Instances in which scientists detect and address flaws in work constitute evidence of success, not failure,” a group of leaders of the American science establishment — including the past, present, and future presidents of the National Academy of Sciences — wrote in Science in 2015, “because they demonstrate the underlying protective mechanisms of science at work.” But this happy posture ignores the systemic failings at the heart of science’s problems today.

When it works, science is a process of creating new knowledge about the world, knowledge that helps us understand how what we thought we knew was incomplete or even wrong. This picture of success doesn’t mean, however, that we should reasonably expect that most scientific results are unreliable or invalid at the moment they are published. What it means, instead, is that the results of research — however imperfect — are reliable in the context of the existing state of knowledge, and are thus a definite step toward a better understanding of our world and a solid foundation for further research. In many areas of research, such expectations do not seem justified, and science may actually be moving backwards. Richard Horton, editor-in-chief of The Lancet, puts it like this:

The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.

C. Glenn Begley and John Ioannidis — researchers who have been courageous and visionary in exposing systemic weakness in biomedical science — concluded in a January 2015 article that “it is impossible to endorse an approach that suggests that we proceed with an ongoing research investment that is producing results the majority of which cannot be substantiated and will not stand the test of time.” Similarly, an economic analysis published in June 2015 estimates that $28 billion per year is wasted on biomedical research that is unreproducible. Science isn’t self-correcting; it’s self-destructing.

Part of the problem surely has to do with the pathologies of the science system itself. Academic science, especially, has become an onanistic enterprise worthy of Swift or Kafka. As a university scientist you are expected to produce a continual stream of startling and newsworthy findings. Here’s how the great biologist E. O. Wilson describes the life of an academic researcher:

You will need forty hours a week to perform teaching and administrative duties, another twenty hours on top of that to conduct respectable research, and still another twenty hours to accomplish really important research…. Make an important discovery, and you are a successful scientist in the true, elitist sense in a profession where elitism is practiced without shame…. Fail to discover, and you are little or nothing.

The professional incentives for academic scientists to assert their elite status are perverse and crazy, and promotion and tenure decisions focus above all on how many research dollars you bring in, how many articles you get published, and how often those articles are cited in other articles.

To bring in research grants, you need to show that your previous grants yielded “transformative” results and that your future work will do the same. To get papers published, you need to cite related publications that provide support for your hypotheses and findings. Meanwhile, the peers who review funding proposals and journal articles are playing in the same system, competing for the same funds, motivated by the same incentives. To get the research done you need graduate students and postdoctoral fellows to do most of the grunt work of running experiments and collecting data, which is how they get trained and acculturated to become the next generation of academic scientists behaving the same way. Universities — competing desperately for top faculty, the best graduate students, and government research funds — hype for the news media the results coming out of their laboratories, encouraging a culture in which every scientist claims to be doing path-breaking work that will solve some urgent social problem. (Scientists themselves are complicit in the hype machine: according to one study, the frequency of positive words like “innovative,” “novel,” “robust,” and “unprecedented” in biomedical research publications in 2014 was nearly nine times as high as it was forty years earlier.) The scientific publishing industry exists not to disseminate valuable information but to allow the ever-increasing number of researchers to publish more papers — now on the order of a couple million peer-reviewed articles per year — so that they can advance professionally. As of 2010, about 24,000 peer-reviewed scientific journals were being published worldwide to accommodate this demand.

These figures would not have shocked the historian of science and physicist Derek de Solla Price, who more than half a century ago observed that “science is so large that many of us begin to worry about the sheer mass of the monster we have created.” In his book Little Science, Big Science (1963), Price noted presciently that the number of scientists was growing so fast that it could only lead to a “scientific doomsday” of instability and stress, and that exponential growth of the scientific enterprise would bring with it declining scientific originality and quality, as the number of truly great scientists was progressively drowned out by the much more rapidly increasing number of merely competent ones.

One cumulative result of these converging stresses (a result that Price did not anticipate) is a well-recognized pervasive bias that infects every corner of the basic research enterprise — a bias toward the new result. Bias is an inescapable attribute of human intellectual endeavor, and it creeps into science in many different ways, from bad statistical practices to poor experimental or model design to mere wishful thinking. If biases are random then they should more or less balance each other out through multiple studies. But as numerous close observers of the scientific literature have shown, there are powerful sources of bias that push in one direction: come up with a positive result, show something new, different, eye-catching, transformational, something that announces you as part of the elite.
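The statistical point is easy to see in miniature. The following toy simulation, with numbers invented purely for illustration, compares a literature in which every study errs randomly with one in which null and negative findings quietly disappear.

```python
# Toy simulation (invented numbers): random errors across many studies wash out,
# but a one-directional push toward positive results does not.
import random

random.seed(0)
TRUE_EFFECT = 0.0       # assume the real effect being studied is zero
N_STUDIES = 10_000

# Case 1: every study errs, but the errors are random and symmetric around the truth.
estimates = [TRUE_EFFECT + random.gauss(0, 1) for _ in range(N_STUDIES)]

# Case 2: the same studies, but findings that are not clearly positive never appear,
# mimicking a systematic bias toward new, eye-catching, publishable results.
published = [e for e in estimates if e > 0.5]

print("average effect across all studies:      %+.3f" % (sum(estimates) / len(estimates)))
print("average effect in the published record: %+.3f" % (sum(published) / len(published)))
```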

Yet, to fixate on systemic positive bias in an out-of-control research system is to miss the deeper and much more important point. The reason that bias seems able to infect research so easily today is that so much of science is detached from the goals and agendas of the military-industrial innovation system, which long gave research its focus and discipline. Nothing is left to keep research honest save the internal norms of the professional, peer-review system itself. And how well are those norms holding up? A survey of more than 1,500 scientists published by Nature in May 2016 shows that 80 percent or more believe that scientific practice is being undermined by such factors as “selective reporting” of data, publication pressure, poor statistical analysis, insufficient attention to replication, and inadequate peer review. In short, we are finding out what happens when objective inquiry is guided by Bush’s beautiful lie. “Scientific doomsday” indeed.

Lemmings Studying Mice

A neuroscientist by training, Susan Fitzpatrick worries a lot about science and what Price called the “sheer mass of the monster.” “The scientific enterprise used to be small, and in any particular area of research everyone knew each other; it had this sort of artisanal quality,” she says. “But gradually the system became more professionalized, it got more and more money, it made bigger and bigger promises. So the qualities that make scientific research reliable, honest, got undermined by the need to feed the beast, and the system got too big to succeed.” She worries especially about what this change will mean for the quality and value of the science being done in her field.

As president of the James S. McDonnell Foundation, which funds research on cognition and the brain, Fitzpatrick is concerned about where research dollars are flowing. Just as Visco observed what she called the “lemming effect” — researchers running from one hot topic to the next — Fitzpatrick also sees science as driven by a circular, internal logic. “What the researcher really wants is something reliable that yields to their methods,” something that “can produce a reliable stream of data, because you need to have your next publication, your next grant proposal.”

For example, scientists commonly use mouse brains to study neurodegenerative diseases like Parkinson’s or Alzheimer’s, or to study behavioral problems such as addictiveness or attention deficit disorders. What’s great about mice is that they yield to scientists’ methods. They can be bred in virtually limitless quantity, with particular traits designed into them, such as a gene mutation that triggers Alzheimer’s-like symptoms. This allows researchers to test specific hypotheses about, say, the genetics or neurochemistry of a mouse-brain disease.

More than one hundred different strains of mice have been developed for the purpose of studying Alzheimer’s, and numerous chemical compounds have been shown to slow the course of Alzheimer’s-like symptoms in mice. Yet despite the proliferation of mouse and other animal models, only one out of 244 compounds that made it to the trial stage in the decade between 2002 and 2012 was approved by the FDA as a treatment for humans — a 99.6 percent failure rate, and even the one drug approved for use in humans during that period doesn’t work very well. And why should it be otherwise? The last common ancestor of humans and mice lived 80 million years ago. “You’re using animals that don’t develop neurodegenerative disease on their own,” explains Fitzpatrick. “Even aged mice don’t develop Alzheimer’s disease.” So researchers force some characteristic to develop — such as beta-amyloid plaques on the mouse’s brain, or age-related cognitive decline — but that’s not the same as the human disease in question, “because the process whereby you create that model is not the pathogenesis of the disease. Your treatment is focused on how you created the model, not how the disease occurs naturally.” There is little reason to believe that what’s being learned from these animal models will put us on the right path to understanding — let alone curing — human brain disorders.

Not that such concerns are likely to put a damper on the research. A search for article titles or abstracts containing the words “brain” and “mouse” (or “mice” or “murine”) in the NIH’s PubMed database yields over 50,000 results for the decade between 2005 and 2015 alone. If you add the word “rat” to the mix, the number climbs to about 80,000. It’s a classic case of looking for your keys under the streetlight because that’s where the light is: the science is done just because it can be. The results get published and they get cited and that creates, Fitzpatrick says, “the sense that we’re gaining knowledge when we’re not gaining knowledge.”
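For what it is worth, counts of this kind are easy to reproduce through NCBI's public E-utilities interface. The sketch below is only a guess at the query behind the figures above, so the number it returns should be read as illustrative rather than definitive.

```python
# Sketch of a PubMed count via NCBI E-utilities. The query string is an assumption,
# an attempt to mirror the title/abstract search described in the text.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = "brain[tiab] AND (mouse[tiab] OR mice[tiab] OR murine[tiab])"

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "datetype": "pdat",   # restrict by publication date
    "mindate": "2005",
    "maxdate": "2015",
    "retmax": "0",        # we only need the total count, not the record IDs
    "retmode": "json",
})

with urllib.request.urlopen(f"{BASE}?{params}") as response:
    result = json.load(response)

print("PubMed records matching the query:", result["esearchresult"]["count"])
```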


But it’s worse than that. Scientists cite one another’s papers because any given research finding needs to be justified and interpreted in terms of other research being done in related areas — one of those “underlying protective mechanisms of science.” But what if much of the science getting cited is, itself, of poor quality? Consider, for example, a 2012 report in Science showing that an Alzheimer’s drug called bexarotene would reduce beta-amyloid plaque in mouse brains. Efforts to reproduce that finding have since failed, as Science reported in February 2016. But in the meantime, the paper has been cited in about 500 other papers, many of which may have been cited multiple times in turn. In this way, poor-quality research metastasizes through the published scientific literature, and distinguishing knowledge that is reliable from knowledge that is unreliable or false or simply meaningless becomes impossible.

A scientific model allows you to study a simplified version, or isolated characteristics, of a complex phenomenon. This simplification is sometimes justified, for instance, if the cause-and-effect relations being studied in the model (say, the response of an airfoil to turbulence in a wind tunnel) operate in the same way in the more complex context (an airplane flying through a storm). In such cases you can have some confidence that what you’ve learned from the model can be applied to the actual problem at hand. Fitzpatrick thinks that such reasoning is not justified when using mouse brains to model human neurodegenerative disease.

But her concerns about this way of approaching brain science have more devastating implications when the models are extended still further to explore the neurological aspects of human behavioral dysfunction:

Because these questions are incredibly complex and we’re trying to reduce it to some biological models, you have to create proxies. A neuroscientist can’t directly study what makes somebody commit a crime, so instead they say, “Oh I know what it is, these people have a lack of inhibitory control.” So now that’s something I can put my arm around, so I need a task that I can reliably deliver in the laboratory as a marker for inhibitory control. “Oh, and we have one, there’s this reaction-time task …” Now we’re studying something, calling it something else, creating a causal hypothesis about people’s behavior that’s made up of tenuous links.

The problem, as Fitzpatrick explains it, is that in this space between the proxy — say, measuring inhibitory control in a mouse, or for that matter a person — and a complex behavior, such as drug addiction, lies a theory about what causes crime and addiction and anti-social behavior. The theory “has ideological underpinnings. It shapes the kind of questions that get asked, the way research gets structured, the findings that get profiled, the person that gets asked to give the big speech.”

Fitzpatrick is observing what happens when the interplay between science and technology is replaced by the “free play of free intellects.” Scientists can never escape the influence of human bias. But human bias doesn’t have much room to get a foothold when research is tightly linked to the performance of a particular technology — through, say, the desire for lighter, stronger automobile engines, or for faster, more efficient web search engines.

Technology keeps science honest. But for subjects that are incredibly complex, such as Alzheimer’s disease and criminal behavior, the connection between scientific knowledge and technology is tenuous and mediated by many assumptions — assumptions about how science works (mouse brains are good models for human brains); about how society works (criminal behavior is caused by brain chemistry); or about how technology works (drugs that modify brain chemistry are a good way to change criminal behavior). The assumptions become invisible parts of the way scientists design experiments, interpret data, and apply their findings. The result is ever more elaborate theories — theories that remain self-referential, and unequal to the task of finding solutions to human problems.

All this may go some way toward explaining why the rate of failure of pharmaceutical interventions for Alzheimer’s is so high. When mouse models are used to explore theories of human brain health and behavior, there is no reliable way to assess the validity of the science or the assumptions underlying it. This is not to say that scientists should just start conducting on humans the experiments they now perform on mice. But as Fitzpatrick emphasizes, the huge amount of mouse-brain research now being done is a reflection of the internal dysfunction of the research system, not of the potential for the “free play of free intellects” to help alleviate the human suffering caused by neurological disease and dysfunction.

But Is It Science?

Problems of values, assumptions, and ideology are not limited to neuroscience but are pervasive across the scientific enterprise. Just as Derek Price recognized the threat to science from its unsustainable growth decades before the symptoms became painfully apparent, so was the threat of ideology in science flagged long ago by the physicist Alvin Weinberg. A bona fide member of the military-industrial complex, Weinberg ran the Oak Ridge National Laboratory — originally part of the Manhattan Project — and was a tireless advocate for nuclear energy. Involved as he was in the early political debates over nuclear power, he was concerned about the limits of what science could tell us about complex social and political issues.

In his 1972 article “Science and Trans-Science,” Weinberg observed that society would increasingly be calling upon science to understand and address the complex problems of modernity — many of which, of course, could be traced back to science and technology. But he accompanied this recognition with a much deeper and more powerful insight: that such problems “hang on the answers to questions that can be asked of science and yet which cannot be answered by science.” He called research into such questions “trans-science.” If traditional sciences aim for precise and reliable knowledge about natural phenomena, trans-science pursues realities that are contingent or in flux. The objects and phenomena studied by trans-science — populations, economies, engineered systems — depend on many different things, including the particular conditions under which they are studied at a given time and place, and the choices that researchers make about how to define and study them. This means that the objects and phenomena studied by trans-science are never absolute but instead are variable, imprecise, uncertain — and thus always potentially subject to interpretation and debate.

By contrast, Weinberg argues, natural sciences such as physics and chemistry study objects that can be characterized by a small number of measurable variables. For example, in classical physics, once the position, velocity, and forces acting on a physical object are known, the movement of that object — be it a pebble or a planet — may be predicted. (This is not the case in quantum physics, in which the position and velocity of individual particles can no longer be measured simultaneously with precision. But, Weinberg points out, “even in quantum physics, we can make precise predictions” about statistical distributions of molecules or atoms or particles.) Moreover, the objects of study — whether the mass of an electron, the structure of a molecule, or the energy released by a chemical reaction — can be precisely defined and unambiguously characterized in ways that all scientists can generally agree upon. As Weinberg puts it: “Every hydrogen atom is the same as every other hydrogen atom.”
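The contrast can be put in a formula. In classical mechanics, to take the standard textbook statement of Weinberg's example, specifying the forces and the initial state fixes the entire trajectory of the pebble or the planet:

```latex
% Newton's second law with initial conditions: given F, x_0, and v_0,
% the trajectory x(t) is uniquely determined (under standard smoothness assumptions).
m\,\ddot{\mathbf{x}}(t) = \mathbf{F}\bigl(\mathbf{x}(t),\, \dot{\mathbf{x}}(t),\, t\bigr),
\qquad \mathbf{x}(0) = \mathbf{x}_0, \quad \dot{\mathbf{x}}(0) = \mathbf{v}_0
```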

This combination of predictable behavior and invariant fundamental attributes is what makes the physical sciences so valuable in contributing to technological advance — the electron, the photon, the chemical reaction, the crystalline structure, when confined to the controlled environment of the laboratory or the engineered design of a technology, behaves as it is supposed to behave pretty much all the time.

But many other branches of science study things that cannot be unambiguously characterized and that may not behave predictably even under controlled conditions — things like a cell or a brain, or a particular site in the brain, or a tumor, or a psychological condition. Or a species of bird. Or a toxic waste dump. Or a classroom. Or “the economy.” Or the earth’s climate. Such things may differ from one day to the next, from one place or one person to another. Their behavior cannot be described and predicted by the sorts of general laws that physicists and chemists call upon, since their characteristics are not invariable but rather depend on the context in which they are studied and the way they are defined. Of course scientists work hard to come up with useful ways to characterize the things they study, like using the notion of a species to classify biologically distinct entities, or GDP to define the scale of a nation’s economy, or IQ to measure a person’s intelligence, or biodiversity to assess the health of an ecosystem, or global average atmospheric temperature to assess climate change. Or they use statistics to characterize the behavior of a heterogeneous class of things, for example the rate of accidents of drivers of a certain age, or the incidence of a certain kind of cancer in people with a certain occupation, or the likelihood of a certain type of tumor to metastasize in a mouse or a person. But these ways of naming and describing objects and phenomena always come with a cost — the cost of being at best only an approximation of the complex reality. Thus scientists can breed a strain of mouse that tends to display loss of cognitive function with aging, and the similarities between different mice of that strain may approximate the kind of homogeneity possessed by the objects studied by physics and chemistry. This makes the mouse a useful subject for research. But we must bear the cost of that usefulness: the connection between the phenomena studied in that mouse strain and the more complex phenomena of human diseases, such as Alzheimer’s, is tenuous — or even, as Susan Fitzpatrick worries, nonexistent.


For Weinberg, who wanted to advance the case for civilian nuclear power, calculating the probability of a catastrophic nuclear reactor accident was a prime example of a trans-scientific problem. “Because the probability is so small, there is no practical possibility of determining this failure rate directly — i.e., by building, let us say, 1,000 reactors, operating them for 10,000 years and tabulating their operating histories.” Instead of science, we are left with a mélange of science, engineering, values, assumptions, and ideology. Thus, as Weinberg explains, trans-scientific debate “inevitably weaves back and forth across the boundary between what is and what is not known and knowable.” More than forty years — and three major reactor accidents — later, scientists and advocates, fully armed with data and research results, continue to debate the risks and promise of nuclear power.

To ensure that science does not become completely infected with bias and personal opinion, Weinberg recognized that it would be essential for scientists to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” But doing so would require “the kind of selfless honesty which a scientist or engineer with a position or status to maintain finds hard to exercise.” Moreover, this is “not at all easy since experts will often disagree as to the extent and reliability of their expertise.”

Weinberg’s pleas for “selfless honesty” in drawing the lines of expertise have gone largely unheeded, as scientists have, over the past forty years, generally sought not to distinguish trans-science from science but to try — through what amounts to a modern sort of alchemy — to transmute trans-science into science. In fact, the great thing about trans-science is that you can keep on doing research; you can, as Fitzpatrick says, create “the sense that we’re gaining knowledge when we’re not gaining knowledge,” without getting any closer to a final or useful answer.
