System looks to compare radiology results with downstream clinical information

A system comparing radiology findings with diagnoses provided by other clinical data sources was recently put to the test in a study published online in the Journal of the American Medical Informatics Association. Early indications are that it passed.

Lead researcher William Hsu, PhD, of the Medical Imaging Informatics Group in Los Angeles, and colleagues evaluated their system, which pulls data from electronic health records and examines clinical reports for imaging studies relevant to the diagnosis. They said the goal of their system was “to establish a method for measuring the accuracy of a health system at multiple levels of granularity, from individual radiologists to subspecialty sections, modalities, and entire departments.”

For the study, Hsu and colleagues looked at breast imaging exams and found the system offered precision and recall rates of 84 percent and 92 percent, respectively.
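For readers unfamiliar with the terms, precision and recall are standard retrieval measures. Read against this study's setup (our gloss, not wording from the paper), they can be interpreted roughly as:

    precision = correct radiology-pathology matches / all matches the system proposed
    recall    = correct radiology-pathology matches / all matches in the reference standard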

“The approach utilizes downstream diagnostic conclusions captured in the data provided by other departments, such as pathology and surgery, as 'truth' to which earlier diagnoses generated by radiology are compared,” they wrote.

The researchers noted the passage of the Patient Protection and Affordable Care Act and other changes in healthcare delivery and reimbursement have led to a greater emphasis on quality measures. Previous research found errors in approximately 4 percent of radiological interpretations reported during daily practice, while variability among radiologists’ interpretations may exceed 45 percent.

In this analysis, the researchers’ system was compared against a reference standard of 18,101 breast imaging examinations performed in 2010 and 2011, which resulted in 301 pathological diagnoses. They said they chose these cases because they had already been reviewed and audited.

They found that 84.7 percent of the radiology-pathology matches generated automatically by the system agreed with the matches defined in the reference data set. The primary sources of error were biopsies that occurred outside the 90-day window defined in their algorithm and biopsies that were not performed at their institution. Further, in fewer than 5 percent of cases, the findings in the pathology report did not match what was captured in the reference data set.
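To illustrate the general idea, the following is a minimal sketch, assuming hypothetical field names and a simplified 90-day rule rather than the authors’ actual implementation, of how a radiology exam might be paired with a downstream pathology report:

    from datetime import date, timedelta

    # Hypothetical, simplified records; the real system draws these from the EHR.
    radiology_exams = [
        {"patient_id": "P1", "exam_date": date(2010, 3, 1), "finding": "suspicious mass"},
        {"patient_id": "P2", "exam_date": date(2010, 5, 10), "finding": "benign calcification"},
    ]
    pathology_reports = [
        {"patient_id": "P1", "report_date": date(2010, 3, 20), "diagnosis": "invasive carcinoma"},
    ]

    WINDOW = timedelta(days=90)  # biopsies outside this window were a noted source of error

    def match_pathology(exam, reports, window=WINDOW):
        """Return the first pathology report for the same patient within the window, if any."""
        for report in reports:
            same_patient = report["patient_id"] == exam["patient_id"]
            delta = report["report_date"] - exam["exam_date"]
            if same_patient and timedelta(0) <= delta <= window:
                return report
        return None  # no downstream "truth" available, e.g., biopsy done elsewhere or never

    for exam in radiology_exams:
        print(exam["patient_id"], match_pathology(exam, pathology_reports))

Exams with no pathology report inside the window have no downstream “truth” to compare against, which mirrors the sources of error the authors describe.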

The researchers noted they are conducting ongoing pilot studies with radiologists and fellows to evaluate the information presented through the dashboard. They are also collecting data to assess how users respond to their scores and to determine whether the system improves their ability to discriminate cases similar to those found to be discordant.

One limitation of the system, according to the researchers, is that using pathology as the reference diagnosis helps assess specificity but not sensitivity.

“The system cannot assess the accuracy of a diagnosis if the patient does not have a subsequent biopsy, hence overlooking cases where a radiologist may have missed an abnormal finding,” they wrote. “In addition, pathology findings may also inherently have errors, as shown in a recent study that demonstrated a discordance rate of 24.7 percent among three pathologist interpretations with the highest variability in ductal carcinoma in situ and atypia.”

Tim Casey,

Executive Editor

Tim Casey joined TriMed Media Group in 2015 as Executive Editor. For the previous four years, he worked as an editor and writer for HMP Communications, primarily focused on covering managed care issues and reporting from medical and health care conferences. He was also a staff reporter at the Sacramento Bee for more than four years covering professional, college and high school sports. He earned his undergraduate degree in psychology from the University of Notre Dame and his MBA degree from Georgetown University.
