Making peer review work for radiology


Radiology must embrace the practice of peer review, and such initiatives would be more beneficial to both patients and imaging professionals if the focus were on performance improvement for the entire profession rather than on measurement and error identification, according to an article published in the May issue of the Journal of the American College of Radiology.

To ensure that peer review programs have this focus, radiologists must own the process, according to Gregory J. Butler, MD, of Dalhousie University, Halifax, Nova Scotia, Canada, and R. Forghani, MD, PhD, of McGill University in Montreal.

“We will need to continue a strategy of clever proactivity to take possession of this important element before it takes possession of us. … As radiologists, we are best equipped to do so, as we are the only people on the planet who really understand what it is we do and how we can do it better,” wrote the authors.

Butler and Forghani explained that peer review has two objectives: data collection and performance improvement through feedback. Data collection has traditionally been the focus of peer review programs, and this information has the potential to identify areas for intervention or practitioners who require remedial education. However, a system that focuses primarily on data collection might not be effective, as it could lead to a lack of engagement by radiologists or result in incorrect conclusions.

With regard to performance improvement, the authors outlined a few essential principles, including:

  • Peer review must include appropriate feedback to a radiologist whose work has an agreed-upon discrepancy;
  • Errors or discrepancies should be reported to the enterprise as a whole; and
  • Measurements should indicate how the enterprise as a whole is performing over time.

The underlying idea behind this form of peer review is that radiologists can learn from one another and from the mistakes of their peers. An important component of this should be a certain level of protection for the radiologists involved, argued Butler and Forghani. Data on discrepancies should not be discoverable for the purposes of legal proceedings, licensing, credentialing or government scrutiny, except in cases where a patient’s safety is an immediate concern. Anonymized error reports can help those within an enterprise learn and see how their performance compares with others, while maintaining professional autonomy.

“In addition to the key goal of education, peer review can, if designed appropriately, have a real-time impact on patient care,” wrote the authors. “To do this, we must abandon the outmoded concept of retrospective (reactive) tracking of errors. In today's new world of quality, prospective (proactive) peer review can identify mistakes before they do harm, by informing a radiologist of a potential error before it is too late to undo the mistake.” This is achieved through a system of double reads, in which a colleague offers a second opinion on an interpretation.

A major hurdle to peer review is the budget. “Quality costs, in dollars, emotional commitment, time, and fear of risk,” wrote Butler and Forghani. But they argue that costs in the long term can be reduced through properly designed programs.