The Lake Wobegon effect + mammo: Rads rate performance higher than it is


Many U.S. mammographers believe their performance is better than it actually is and at least as good as that of their colleagues, according to survey results published in the September issue of the American Journal of Roentgenology.

As quality improvement initiatives focus on measures of clinical performance, physicians need to understand their own performance. A previous survey of U.S. radiologists had suggested a disconnect between radiologists’ estimates of recommendations for further evaluation following screening mammography and the actual rates.

Andrea J. Cook, PhD, from the department of biostatistics at the University of Washington in Seattle, and colleagues sought to determine whether U.S. radiologists accurately estimate their interpretive performance of screening mammography and to assess how they compare their performance with that of their peers.

A total of 174 radiologists from six Breast Cancer Surveillance Consortium registries responded to a mailed survey between 2005 and 2006. The survey focused on estimated and actual recall, false-positive and cancer detection rates and positive predictive value of biopsy recommendation (PPV2). Radiologists also rated their own performance as lower than, similar to or higher than that of their peers, and these self-assessments were compared with actual performance.
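The four audit measures above can be illustrated with a short sketch. This is not the study's code, and the counts are invented; the formulas follow common BI-RADS-style audit definitions (recall and false-positive rates per screen, cancer detection per 1,000 screens, PPV2 as the proportion of biopsy recommendations that prove malignant), which the article itself does not spell out.

```python
# Illustrative sketch only: common audit definitions applied to
# hypothetical counts for one reader's annual screening volume.

def audit_metrics(total_screens, recalls, cancers_detected,
                  biopsy_recs, biopsy_cancers):
    """Return (recall rate, false-positive rate, cancer detection
    rate per 1,000 screens, PPV2) from raw screening counts.

    recalls          -- exams given an abnormal (recall) interpretation
    cancers_detected -- screen-detected cancers (true positives)
    biopsy_recs      -- exams with a biopsy recommendation
    biopsy_cancers   -- biopsy recommendations that proved malignant
    """
    recall_rate = recalls / total_screens
    # Recalls that did not yield a cancer, per screen
    false_positive_rate = (recalls - cancers_detected) / total_screens
    cancer_detection_rate = 1000 * cancers_detected / total_screens
    ppv2 = biopsy_cancers / biopsy_recs
    return recall_rate, false_positive_rate, cancer_detection_rate, ppv2

# Hypothetical reader: 2,000 screens, 200 recalls, 8 cancers found,
# 30 biopsy recommendations of which 8 were malignant
rr, fpr, cdr, ppv2 = audit_metrics(2000, 200, 8, 30, 8)
print(f"recall {rr:.1%}, FP {fpr:.1%}, CDR {cdr:.1f}/1,000, PPV2 {ppv2:.1%}")
# prints: recall 10.0%, FP 9.6%, CDR 4.0/1,000, PPV2 26.7%
```

A reader who tracks only the raw counts can thus recover each measure with simple division, which is the kind of self-audit the authors argue many radiologists do not perform.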

“Most radiologists accurately estimated their recall (78 percent) and cancer detection (74 percent) rates, but only 19 percent and 26 percent accurately estimated their false-positive and PPV2 rates,” wrote Cook et al.

However, many radiologists did not respond to the question about their false-positive rates; this nonresponse was the most common reason for the low accuracy of the false-positive estimates.

The researchers noted a relationship between radiologists' interpretive volume and their ability to accurately gauge performance. Specifically, 50 percent of radiologists who interpreted 1,000 or fewer mammograms annually accurately estimated their recall rate. The corresponding figures rose to 73 percent and 85 percent among radiologists who interpreted 1,001 to 2,000 and more than 2,000 mammograms annually, respectively.

In addition, mammographers who never or rarely referred to numbers or statistics in discussions with patients tended to be less accurate in estimating their own cancer detection rate than those who sometimes, often or always used such data.

Other self-assessment misses included:

  • 19.4 percent of all radiologists underestimated their actual recall rate;
  • 34.2 percent of all radiologists overestimated their false-positive rate; and
  • 50 percent of all radiologists overestimated PPV2.

“Radiologists in general perceived their screening performance as equal to or better than that of others,” wrote Cook and colleagues. Forty-three percent rated their recall rate as similar to that of other radiologists, and 31 percent perceived their recall rate as lower than that of their peers. In addition, 52 percent perceived their false-positive rate as similar to that of their peers, and 33 percent classified their rate as lower.

“Many radiologists perceive themselves as having better interpretative performance than they actually do,” the authors wrote. This information is essential to help radiologists understand whether, and what type of, improvement is needed.

Assessment of one’s own false-positive rate proved particularly perplexing, as only 28 percent of radiologists provided an accurate estimate. “It will be difficult to motivate radiologists to reduce their own false-positive rates (while maintaining sensitivity) if they do not understand what their false-positive rates currently are, how they are calculated, and how they compare with the rates of their peers,” the authors noted.

The researchers suggested several strategies to help radiologists better understand their own performance: a standardized facility and physician auditing form; a comparative website that includes interpretive performance measures of U.S. and international radiologists; and continuing medical education focused on the Mammography Quality Standards Act.

Cook and colleagues emphasized the need for future studies of strategies to improve audit feedback and radiologist education initiatives, and encouraged physicians to participate in the American College of Radiology National Mammography Database.