Over the past decade, it became apparent that a number of fields of research had problems with replication. Published results didn't always survive attempts at repeating the experiments. The extent of the problem was a matter of debate, so several reproducibility projects formed to provide hard numbers. And the results were not great, with most finding that only about half of published studies could be repeated.
These reproducibility projects should serve several purposes. They emphasize to scientific funders and publishers, who are often reluctant to support what could be considered repetitive research, that it's important to ensure results replicate. They should encourage researchers to incorporate internal replications into their research plans. And, finally, they should serve as a caution against relying on research that has already been shown to have replication issues.
While there's been some progress on the first two fronts, the last one apparently remains a problem, according to two researchers at the University of California, San Diego.
Word does not get out
The researchers behind the new work, Marta Serra-Garcia and Uri Gneezy, started with three large replication projects: one focused on economics, one on psychology, and one on the general sciences. Each project took a collection of published papers in its field and attempted to replicate a key experiment from each. And, somewhere around half the time, the replication attempts failed.
That's not to say that the original publications were wrong or useless. Most publications are built from a collection of experiments rather than a single one, so it's possible that there's still valid and useful data in each paper. But, even in that case, the original work should be approached with heightened skepticism; if anyone cites it in their own papers, its failure to replicate should probably be mentioned.
Serra-Garcia and Gneezy set out to answer two questions: are papers containing experiments that failed to replicate still being cited, and if so, is that failure being mentioned?
Answering these questions involved a massive literature search, with the authors hunting down papers that cited the papers that were used in the replication studies and looking at whether those with problems were noted as such. The short answer is that the news is not good. The longer answer is that almost nothing about this research looks good.
The data Serra-Garcia and Gneezy had to work with included a mix of studies that had some replication issues and ones that, at least as far as we know, are still valid. So it was relatively easy to compare the differences in citations for these two groups and see if any trends emerged.
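To make that kind of comparison concrete, here's a minimal sketch of a group comparison along the lines described above. The column names (`replicated`, `citations`) and the numbers are hypothetical stand-ins, not Serra-Garcia and Gneezy's actual dataset or analysis code:

```python
# Hypothetical sketch: compare citation counts for papers that did
# and did not replicate. Column names and data are made up.
import pandas as pd
from scipy import stats

papers = pd.DataFrame({
    "replicated": [True, True, False, False, False, True],
    "citations":  [40, 55, 210, 180, 95, 30],
})

# Average citations per group
print(papers.groupby("replicated")["citations"].mean())

# Welch's t-test: does the mean citation count differ between groups?
replicated = papers.loc[papers["replicated"], "citations"]
failed = papers.loc[~papers["replicated"], "citations"]
t, p = stats.ttest_ind(failed, replicated, equal_var=False)
print(f"difference in means: {failed.mean() - replicated.mean():.1f}, p = {p:.3f}")
```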
One obvious trend was a massive difference in citations. Studies with replication issues accumulated an average of 153 more citations than those that replicated cleanly. In fact, the more cleanly an experiment replicated, the fewer citations the paper received. The effect was even larger for papers published in the high-prestige journals Nature and Science.
Missing the obvious
That would be fine if many of these citations discussed the problems with replication. But they don't. Only 12 percent of the citations made after a paper's replication problems became known mentioned the issue at all.
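The calculation behind that 12 percent figure amounts to something like the following sketch. The field names (`cited_after_replication`, `mentions_failure`) and the records are hypothetical stand-ins for the authors' hand-coded citation data:

```python
# Hypothetical sketch: fraction of post-replication citations that
# acknowledge the replication failure. Fields and records are made up.
citations = [
    {"cited_after_replication": True,  "mentions_failure": False},
    {"cited_after_replication": True,  "mentions_failure": True},
    {"cited_after_replication": False, "mentions_failure": False},
    {"cited_after_replication": True,  "mentions_failure": False},
]

post = [c for c in citations if c["cited_after_replication"]]
acknowledging = sum(c["mentions_failure"] for c in post)
print(f"{acknowledging / len(post):.0%} of post-replication citations mention the failure")
```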
It would be nice to think that only lower-quality papers were citing the ones with replication issues. But that's apparently not the case. Comparing the papers that cited experiments that replicated with those that cited experiments that didn't yielded no significant difference in the prestige of the journals they appeared in. And the two groups of citing papers went on to receive similar numbers of citations themselves.
So, overall, researchers are apparently either unaware of replication issues or don't view them as serious enough to avoid citing the paper. There are plenty of potential contributors here. Unlike retractions, most journals don't have a way of noting that a publication has a replication issue. And researchers themselves may simply keep a standard list of references in a database manager rather than rechecking each paper's status (a disturbing number of retracted papers still get citations, so there's clearly a problem here).
The challenge, however, is figuring out how to fix the problem. A number of journals have made efforts to publish replications, and researchers themselves seem much more likely to incorporate replications of their own work into their initial studies. But making everyone both aware of and cautious about results that failed to replicate is a challenge without an obvious solution.
Science Advances, 2021. DOI: 10.1126/sciadv.abd1705 (About DOIs).