
It’s not just longform journalism and apoplectic Internet commenters that prompt a “tl;dr” from readers. Research papers trigger it, too: scientists are so inundated with new results that merely keeping up is a struggle. According to University of Oxford psychiatrist Michael Sharpe, “Everyone is deluged with information.”
Research papers give a brief summary of their contents in an information-rich “abstract”—often the only text that’s publicly available from a paywalled journal. Time-pressed researchers may rely on the abstract rather than investing in reading the lengthy paper, but those abstracts are not always reliable. A paper published this week in the journal BMJ Evidence-Based Medicine found that more than half of the 116 psychiatry and psychology trials its authors analyzed included some sort of spin that made the results look better than they were.
“These findings raise a major concern,” Sharpe told the Science Media Centre, “especially as readers may draw conclusions on the basis of the abstract alone, without critically appraising the full paper.”
Scientific concealer
Randomized controlled trials (RCTs) are supposed to be conducted to an exacting standard. Because the stakes are so high, the quality of evidence also needs to be high. Patients are randomly assigned to receive either the treatment being tested or a comparison like a placebo or an existing treatment. RCTs are meant to pre-specify exactly what they plan to study and how they will analyze the results, reporting everything they find rather than cherry-picking the most flattering outcomes.
Unfortunately, there are still tricks to be played. The abstract and title both offer opportunities for selective reporting that can dress up a study that didn’t turn out as expected. To assess how common this is, a group of researchers scoured the clinical psychology and psychiatry literature for RCTs that turned up nonsignificant results on the main question they set out to study.
There’s some flexibility in how to report results, as research papers often incorporate a collection of related data. For instance, a study on a particular diet might primarily be tracking weight loss and insulin resistance, and researchers would figure out the technicalities of the trial—like how many patients to include—based on those goals. But the trial could also track quality of life as a secondary endpoint, essentially an interesting add-on that doesn’t have quite as much empirical heft as the primary goals.
A trial could also end up with a mix of significant and nonsignificant results, and “nonsignificant” here has a specific meaning. It’s not just that these studies found a small effect that doesn’t mean much. Rather, their findings weren’t statistically significant: any difference between the treatment and control groups could plausibly be explained by random chance alone.
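To make that concrete, here is a minimal, hypothetical sketch (not from the paper, and the numbers are invented) using Python and SciPy. It simulates a trial where the treatment genuinely does nothing: both groups are drawn from the same distribution, so the t-test’s p-value will usually land above the conventional 0.05 cutoff. That is what a “nonsignificant” result means: the observed gap between groups is the kind of gap chance alone readily produces.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical change in symptom scores for two groups of 50 patients.
# Both groups come from the same distribution, so any difference
# between their means is due to random chance alone.
treatment = rng.normal(loc=-2.0, scale=5.0, size=50)
control = rng.normal(loc=-2.0, scale=5.0, size=50)

result = stats.ttest_ind(treatment, control)
print(f"mean difference: {treatment.mean() - control.mean():.2f}")
print(f"p-value: {result.pvalue:.3f}")  # usually > 0.05: "nonsignificant"
```

Run it with different seeds and the mean difference wobbles around zero while the p-value stays mostly above 0.05; spinning such a result as a treatment benefit is exactly the behavior the study flags.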
Spinning significance
The researchers found 116 papers that fit the bill, and the team then rated their abstracts for spin. The most common tactic was to use the abstract to draw attention to those primary endpoints that turned out significant, but not those that didn’t. It was also common to claim that a treatment was beneficial because a secondary endpoint was significant.
A range of less common tricks all involved some sleight of hand that allowed a report to claim success even though the main goal of the trial didn’t work out. One paper went as far as emphasizing the difference between the treatment and control groups’ results—despite the fact that statistical tests made it clear that this difference could easily be explained by random chance. Overall, 56 percent of the papers in the sample included some kind of spin.
It’s not clear whether clinicians reading the scientific literature are swayed by this kind of spin. The researchers point to evidence that doctors often read just the abstract, and there’s been some experimental work looking at whether spin in abstracts actually sways doctors’ opinions, but that’s produced mixed results.
This analysis focuses on psychology and psychiatry, but there’s no reason to think the problem is limited to these fields. “Trying to cope with [the] deluge by just reading the abstract may be a mistake,” said Sharpe. “Authors, peer reviewers and journal editors all need to pay more attention to the accuracy of titles and abstracts as well as the main report.”
BMJ Evidence-Based Medicine, 2019. DOI: 10.1136/bmjebm-2019-111176 (About DOIs).