AJR: Chest x-ray CAD not ready for universal adoption
The ultimate clinical value of CAD systems is “closely tied to the performance of the readers using them,” explained Moulay Meziane, MD, of the Imaging Institute at Cleveland Clinic Foundation, in Cleveland, and colleagues.
Meziane and colleagues designed a retrospective study to compare the effect of chest x-ray CAD on the follow-up recommendations of chest and general radiologists and pulmonologists, measuring the change in decision-making regarding additional testing after readers reviewed images with CAD.
The researchers recruited six general radiologists, six chest radiologists and six pulmonologists to read 200 chest x-ray studies of patients at high risk for lung cancer with and without CAD.
A chest radiologist with 30 years’ experience established ground truth and selected 100 exams with a range of typical and subtle actionable nodules, defined as malignant at pathology or with malignant behavior at follow-up, and 100 studies without actionable nodules. All studies were acquired between August 2003 and April 2007.
The 18 readers independently interpreted each study, reading the exam without CAD on the first pass and then with CAD on the second. The study assessed two versions of CAD: the FDA-approved Riverain RapidScreen 1.1 and a second version, OnGuard 3.0, which had not yet secured FDA approval. The two versions were randomly applied to the x-ray datasets.
During the initial reading, physicians recorded the size and location of the lesion and rated diagnostic confidence that the lesion represented a nodule and that the nodule was actionable on a 1 to 10 scale. They also offered their follow-up recommendations for each patient.
After readers reviewed the images with CAD, researchers compared the readings with ground truth. Chest radiologists outperformed general radiologists (0.68 vs. 0.65 without CAD and 0.70 vs. 0.64 with CAD), and both groups of radiologists outperformed pulmonologists.
“[P]ulmonologists had significantly lower performances than the chest and general radiologists and were affected differently by CAD,” wrote Meziane et al. “Pulmonologists had an average follow-up rate of 0.46 without CAD that increased significantly with CAD [to 0.52].” In contrast, radiologists had average follow-up rates of 0.26 without CAD, 0.25 with RapidScreen and 0.26 with OnGuard.
Overall, chest and general radiologists recommended unnecessary follow-up in approximately one-quarter of cases, the researchers noted. The corresponding rate for pulmonologists was nearly double the radiologists' rate. Moreover, pulmonologists' follow-up rates for patients without actionable nodules spiked with the application of CAD marks, whereas radiologists readily dismissed false-positive marks.
Although CAD has the potential to detect a large number of overlooked lesions, physicians in this study did not improve their follow-up rates with CAD, a result that contradicts previous studies indicating an increase in sensitivity with CAD.
Meziane and colleagues proposed several explanations for their results, noting that their design required readers to locate their unaided findings on CAD images. They also hypothesized that the readers, who had no prior experience with CAD, may have anticipated a large number of false-positives, affecting their evaluation of true-positive marks.
“In conclusion, there is potential for an increase in inappropriate recommendations for additional testing using CAD because of the tendency for nonradiologist clinicians to believe some of the CAD-detected false-positives. Furthermore, we found no significant improvement in lung cancer detection for general and chest radiologists using CAD. Until CAD reaches a high level of accuracy with a minimum number of false-positives, it should not be universally adopted; instead, use should be based on the individual preferences of readers who see an objective improvement in their performance,” wrote Meziane et al.