Sep 30, 2015

Science Is Often Flawed. It's Time We Embraced That.

By Julia Belluz / vox.com

In his book Derailed, about his fall from academic grace, the Dutch psychologist Diederik Stapel explained his preferred method for manipulating scientific data, in detail that would make any nerd's jaw drop:

"I preferred to do it at home, late in the evening... I made myself some tea, put my computer on the table, took my notes from my bag, and used my fountain pen to write down a neat list of research projects and effects I had to produce.... Subsequently I began to enter my own data, row for row, column for column...3, 4, 6, 7, 8, 4, 5, 3, 5, 6, 7, 8, 5, 4, 3, 3, 2. When I was finished, I would do the first analyses. Often, these would not immediately produce the right results. Back to the matrix and alter data. 4, 6, 7, 5, 4, 7, 8, 2, 4, 4, 6, 5, 6, 7, 8, 5, 4. Just as long until all analyses worked out as planned."


In 2011, when Stapel was suspended over research fraud allegations, he was a rising star in social psychology at Tilburg University in the Netherlands. He had conducted attention-grabbing experiments on social behavior, looking at, for example, whether litter in an environment encouraged racial stereotyping and discrimination. Yet that paper — along with at least 55 others and 10 dissertations written by students he supervised — was built on falsified data.

Stories like Stapel's are what most people think of when they think about how science goes wrong: an unethical researcher methodically defrauding the public.

But outright fraud is just one potential derailment from truth. And it's actually a relatively rare occurrence.

Recently, the conversation about science's wrongness has gone mainstream. You can read, in publications like Vox, the New York Times, or the Economist, about how the research process is far from perfect — from the inadequacies of peer review to the fact that many published results simply can't be replicated. The crisis has gotten so bad that the editor of the medical journal The Lancet, Richard Horton, recently lamented, "Much of the scientific literature, perhaps half, may simply be untrue."

When people talk about flaws in science, they're often focusing on medical and life sciences, as Horton is. But that might simply be because these fields are furthest along in auditing their own problems. Many of the structural problems in medical science could well apply to other fields, too.

That science can fail, however, shouldn't come as a surprise to anyone. It's a human construct, after all. And if we simply accepted that science often works imperfectly, we'd be better off. We'd stop considering science a collection of immutable facts. We'd stop assuming every single study has definitive answers that should be trumpeted in over-the-top headlines. Instead, we'd start to appreciate science for what it is: a long and grinding process carried out by fallible humans, involving false starts, dead ends, and, along the way, incorrect and unimportant studies that only grope at the truth, slowly and incrementally.

Acknowledging that fact is the first step toward making science work better for us all.


From study design to dissemination of research, there are dozens of ways science can go off the rails. Many of the scientific studies that are published each year are poorly designed, redundant, or simply useless. Researchers looking into the problem have found that more than half of studies fail to take steps to reduce biases, such as blinding participants and researchers to who receives the treatment and who gets the placebo.

In an analysis of 300 clinical research papers about epilepsy — published in 1981, 1991, and 2001 — 71 percent were categorized as having no enduring value. Of those, 55.6 percent were classified as inherently unimportant and 38.8 percent as not new. All told, according to one estimate, about $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on flawed and redundant studies.

After publication, there's the well-documented irreproducibility problem — the fact that researchers often can't validate findings when they go back and run experiments again. Just last month, a team of researchers published the findings of a project to replicate 100 of psychology's biggest experiments. They were able to replicate only 39 of them, and one observer — Daniele Fanelli, who studies bias and scientific misconduct at Stanford University in California — told Nature that the reproducibility problem in cancer biology and drug discovery may be even more acute.

Indeed, another review found that researchers at Amgen were unable to reproduce 89 percent of landmark cancer research findings for potential drug targets. (The problem even inspired a satirical publication called the Journal of Irreproducible Results.)

So why aren't these problems caught prior to publication of a study? Consider peer review, in which scientists send their papers to other experts for vetting prior to publication. The idea is that those peers will detect flaws and help improve papers before they are published as journal articles. Peer review won't guarantee that an article is perfect or even accurate, but it's supposed to act as an initial quality-control step.

Yet this traditional "pre-publication" review model has flaws: it relies on the goodwill of scientists who are increasingly pressed for time and may not put in the effort required to properly critique a work; it's subject to the biases of a select few; and it's slow. So it's no surprise that peer review sometimes fails. These factors raise the odds that mistakes, flaws, and even fraudulent work will make it through, even at the highest-quality journals. ("Fake peer review" reports are also now a thing.)

And that's not the only way science can go awry. In his seminal paper "Why Most Published Research Findings Are False," Stanford professor John Ioannidis developed a mathematical model to show how broken the research process is. Researchers run badly designed and biased experiments, too often chasing sensational, unlikely theories instead of more plausible ones. That ultimately distorts the evidence base — and what we think we know to be true in fields like health care and medicine.
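The core of that model is easy to sketch. A "significant" finding's positive predictive value (PPV), the chance it reflects a true relationship, depends on the pre-study odds R that a tested hypothesis is true, the study's statistical power, and the significance threshold alpha. Here is a minimal Python sketch of the basic, bias-free version of the formula; the R and power values plugged in are illustrative assumptions, not numbers taken from Ioannidis's paper:

```python
def ppv(R, power=0.8, alpha=0.05):
    """Positive predictive value of a statistically significant finding,
    following the basic (bias-free) model in Ioannidis (2005):
    PPV = power * R / (power * R + alpha),
    where R is the pre-study odds that a tested hypothesis is true."""
    return power * R / (power * R + alpha)

# Illustrative assumptions, not values from the paper:
print(ppv(R=1/20))                # ~0.44: even well-powered tests of long shots
                                  # produce more false positives than true ones
print(ppv(R=1/20, power=0.2))     # ~0.17: underpowered studies make it worse
print(ppv(R=1/1000, power=0.2))   # ~0.004: in very exploratory fields, almost
                                  # every "finding" is false
```

Under these assumptions, a field's truth rate is driven as much by what it chooses to test as by how carefully it tests it, which is exactly Ioannidis's point about chasing sensational theories.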

All of these problems can be further exacerbated through the dissemination of research, when university press offices, journals, or research groups push out their findings for public consumption through press releases and news stories.

One recent British Medical Journal study looked at 462 press releases about human health studies issued in 2011 by 20 leading UK research universities. The authors compared the press releases with both the underlying studies and the resulting news coverage, to find out where overblown claims originate.

Take, for example, the notion that coffee can prevent cancer. Did that claim come from the study itself, from the press release, or from the journalist's imagination? The researchers discovered that university press offices were a major source of overhype: more than one-third of press releases contained exaggerated claims of causation (when the study itself only suggested correlation), unwarranted inferences about human health from animal studies, or unfounded health advice.

These exaggerated claims then seeped into news coverage. When a press release included explicit health advice, 58 percent of the related news articles did so, too (even if the study itself provided no such advice). When a press release confused correlation with causation, 81 percent of related news articles did, too. And when press releases made unwarranted inferences about animal studies, 86 percent of the journalistic coverage did, too. The study authors therefore concluded, "The odds of exaggerated news were substantially higher when the press releases issued by the academic institutions were exaggerated."

Worse, the scientists were usually present during the spinning process, the researchers wrote: "Most press releases issued by universities are drafted in dialogue between scientists and press officers and are not released without the approval of scientists and thus most of the responsibility for exaggeration must lie with the scientific authors." 

In another 2012 study, also published in the BMJ, researchers examined press releases from major medical journals and compared them with the newspaper articles generated. They found a direct link between the scientific rigor in the press release and rigor in the related news stories. "High quality press releases issued by medical journals seem to make the quality of associated newspaper stories better," they wrote, "whereas low quality press releases might make them worse."

Meanwhile, it's difficult for many people to access a great deal of scientific research — impeding the free flow of information.

Sometimes the problem manifests rather innocuously: in an analysis of more than a million hyperlinks in research papers published between 1997 and 2012, researchers found that between 13 percent and nearly 25 percent of hyperlinks in the scientific journals they looked at were broken. 

Other times, it's less innocuous. Right now, taxpayers fund a lot of the science that gets done, yet journals charge users ludicrous sums of money to view the finished product. American universities and government groups spend $10 billion each year to access science. The British commentator George Monbiot once compared academic publishers to the media tycoon Rupert Murdoch, concluding that the former were more predatory. "The knowledge monopoly is as unwarranted and anachronistic as the corn laws," he wrote. "Let's throw off these parasitic overlords and liberate the research that belongs to us."

Despite this outrageous setup and all the attention paid to it over the past 20 years, the status quo remains firmly entrenched, especially when it comes to health research. None of us — physicians, policymakers, journalists, curious patients — can access many of the latest research findings unless we fork over a hefty sum or the work happens to be published in an open-access journal.

Because of these now well-known problems, it's not unusual to hear statements like those from The Lancet editor Richard Horton that "Much of the scientific literature, perhaps half, may simply be untrue." He continued: "Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness."


On the bright side, these troubles, and the crisis of confidence they have provoked, have given rise to an unprecedented push to fix broken systems. An open-data movement — which seeks to share or publish the raw data on which scientific publications are based — has gained traction around the world. So has the open-access movement, which is pushing to make all research findings freely available rather than leaving them locked behind paywalls.

In recent years, there has also arisen a "post-publication peer review" culture. For example, a new website, PubPeer, allows scientists to comment on each other's articles, critiquing and discussing works anonymously, as soon as they've been published in journals — kind of like a comments section on a news site. This has opened up the space for criticism beyond the traditional peer review process. It has also helped uncover science fraud and weed out problematic studies.

"Meta research" is becoming increasingly prominent and unified across scientific disciplines. Last year, Stanford launched the Meta-Research Innovation Center to bring researchers who work on studying research together in one place. The center is guided by this mission statement: "Identifying and minimizing persistent threats to medical-research quality." These meta-researchers apply the scientific method to study science itself and find out where it falters.

With the growth of research on research has come another important insight: that we need to stop giving too much credence to single studies, and instead rely more on syntheses of many studies, which bring together all findings on a given topic and minimize the biases inherent within each particular study.
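As a concrete, if simplified, illustration of what a synthesis buys you: a fixed-effect meta-analysis pools study estimates weighted by the inverse of their variance, so imprecise studies count less and no single result dominates. The effect sizes and standard errors in this Python sketch are made-up numbers for illustration, not data from any real review:

```python
# Fixed-effect (inverse-variance) meta-analysis: a standard pooling formula.
# The effect sizes and standard errors are hypothetical, for illustration only.
effects = [0.30, 0.10, 0.45, 0.05]   # per-study effect estimates
ses     = [0.15, 0.05, 0.20, 0.10]   # per-study standard errors

weights   = [1 / se**2 for se in ses]                  # precise studies weigh more
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                  # standard error of the pool

print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
# -> pooled effect = 0.12 +/- 0.08: closer to the precise studies' modest effects
#    than to the noisy studies' eye-catching large ones.
```

Real syntheses, such as Cochrane reviews, layer on random-effects models and bias assessments, but the weighting logic is the point: a single flashy study gets diluted by the whole evidence base.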


More and more fields are also working on reproducibility projects, like the one we noted in psychology. (Some have even dubbed it a "reproducibility revolution.") Hopefully this revolution will fix some of science's flaws.

In the meantime, recognizing that these flaws are frequent and often inevitable might actually give us a healthier appreciation for how science works — and help us think about more ways to improve it.

Long before it was mainstream to criticize science, Sheila Jasanoff, a Harvard professor, was arguing that science — and scientific facts — are socially constructed, shaped more by power, politics, and culture (the "prevailing paradigm") than by societal need or the pursuit of truth. "Scientific knowledge, in particular, is not a transcendent mirror of reality," she writes in her book States of Knowledge. "It both embeds and is embedded in social practices, identities, norms, conventions, discourses, instruments and institutions — in short, in all the building blocks of what we term the social." In a more recent conversation, she cautioned, "There is something terribly the matter with projecting an idealistic view of science."

Whether or not you buy the social-constructivist argument, it rests on a point that people too often fail to appreciate about science: it is carried out by people, and people are flawed; therefore, science will, inevitably, be flawed. Or, as Jasanoff puts it, "Science is a human system."

A failure to appreciate how science works, its faults and limitations, breeds mistrust. At a meeting at the National Academy of Sciences this month, health law professor and author Tim Caulfield pointed out that one of the things readers often use against his pro-science arguments is that "science is wrong" anyway, so why bother. In other words, people hear about research misconduct or fraud, see the contradictory studies out there, and conclude that they can't trust science.

Instead, if people saw science as a human construction — the result of a tedious, incremental process that can be imperfect in its pursuit of truth — both science and the public understanding of science would be better off. We could learn to trust science for what it is and avoid misunderstandings around what it is not.

While it may seem that critics like Jasanoff scoff at science, that's not the case. She, for one, has actually made criticizing science her life's work — a testament to her reverence for science and her desire to improve its methods. Over the past 20 years, she's helped build a little-known field called "science and technology studies," or just STS, that is now starting to gain wider prominence. Politics has political science to study its functioning. There are literary studies for fiction and poetry. STS studies science itself — how it's carried out, what it gets right, where it goes wrong, its harms and benefits.

"We need to change what the starting assumption ought to be," Jasanoff explains. "If it's provisionality rather than truth, we need to build in the checks and balances around that." As such efforts —  like the reproducibility projects, or post-publication peer review — gain traction, the scientific community is waking up to that fact. 

Now the rest of us need to.
