Spin doctor: Distorted data common in med journals

Overinterpretation and misreporting of results in diagnostic accuracy studies occur frequently in journals with high impact factors, which may increase healthcare costs and lead to patient harm, according to a study published online Jan. 30 in Radiology. Potential overinterpretation was found in all imaging studies.

Previous research has found that overinterpretation, the distortion or misrepresentation of study results to make interventions look favorable, is common in randomized controlled trials.

Eleanor A. Ochodo, MBChB, MIH, from the department of clinical epidemiology, biostatistics and bioinformatics at the University of Amsterdam, and colleagues aimed to determine the frequency of overinterpretation in diagnostic accuracy studies, which evaluate tests or markers to identify patients with a target condition.

“The clinical use of tests based on inflated conclusions may cause physicians to make incorrect clinical decisions, thereby compromising patient safety. Exaggerated conclusions could also lead to unnecessary testing and avoidable healthcare costs,” Ochodo et al wrote.

The researchers mined MEDLINE for diagnostic accuracy studies published between January and June 2010 in journals with an impact factor of four or higher. They defined overinterpretation as explicit false-positive interpretation of results and potential overinterpretation as practices that facilitate overinterpretation.

The final analysis focused on 126 studies, from which three forms of overinterpretation were assessed:

  • An overly optimistic abstract that only reported the best results or used stronger language than the main text;
  • Favorable conclusions or test recommendations based on selected subgroups; and
  • Discrepancy between the study aim and conclusion.

Potential forms of overinterpretation were:

  • Not stating a test hypothesis;
  • Not reporting a sample size calculation;
  • Not stating or unclearly stating the intended role of the test under evaluation;
  • Not prespecifying groups for subgroup analysis in the methods section;
  • Not prespecifying positivity thresholds of tests;
  • Not stating confidence intervals of accuracy measurements; and
  • Using inappropriate statistical tests.

A total of 31 percent of the studies contained a form of overinterpretation, with an overly optimistic abstract the most frequent offense. Approximately one in five studies drew stronger conclusions or test recommendations in the abstract than in the main text.

Among the 53 imaging studies in the larger pool, 30 percent contained a form of overinterpretation.

In terms of potential overinterpretation, 89 percent of studies did not report a sample size calculation and 88 percent did not state a test hypothesis. All of the imaging studies contained a form of potential overinterpretation, reported Ochodo and colleagues.

Overinterpretation may be linked with negative practice implications, according to Ochodo et al. “One of the most important consequences might be that diagnostic accuracy studies with optimistic conclusions may be highly cited leading to a cascade of inflated and questionable evidence in literature. Subsequently, this may translate to the premature adoption of tests in clinical practice.”

The authors recommended that journals emphasize the submission of manuscripts according to the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines.
