Computer-aided detection (CAD) increased radiologist sensitivity for breast cancer by 10 percent and produced a concomitant increase in the recall rate, according to an article published in the March issue of the American Journal of Roentgenology.
The goal (and promise) of CAD is to improve the effectiveness of screening mammography. To better assess the ability of CAD to detect cancers missed at screening, and to mimic the clinical experience under controlled conditions, Robert M. Nishikawa, PhD, of the department of radiology at the University of Chicago, and colleagues designed an observer study.
The researchers noted that previous retrospective studies had shown CAD could detect a malignant lesion a radiologist would miss clinically. However, those studies had not demonstrated that radiologists would recognize when the computer had detected a malignant lesion.
The researchers reviewed prior screening mammograms of patients in whom cancer later developed. The study comprised 300 cases; 66 cases revealed cancer, and three of those cases had two malignant lesions. Every malignant lesion had been missed clinically but was visible in retrospect to at least one of two radiologists. Eight radiologists served as readers in the study.
Nishikawa and colleagues assessed the aggressiveness with which a radiologist used CAD as the total number of extra recalls made when using CAD. They measured the subtlety of a malignant lesion by subtracting from 1 the fraction of radiologists who detected the lesion without the assistance of CAD.
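As an illustration of these two measures, here is a minimal sketch; the function names and the per-lesion sample values are hypothetical, while the recall figures echo the study's reported averages.

```python
def subtlety(n_detected_without_cad: int, n_readers: int) -> float:
    """Subtlety = 1 - fraction of readers who found the lesion unaided.
    A lesion no reader found without CAD has subtlety 1.0."""
    return 1 - n_detected_without_cad / n_readers

def cad_aggressiveness(recalls_with_cad: int, recalls_without_cad: int) -> int:
    """Aggressiveness = total extra recalls made when using CAD."""
    return recalls_with_cad - recalls_without_cad

# Hypothetical lesion: 2 of 8 readers detected it without CAD.
print(subtlety(2, 8))               # 0.75
# Average recalls from the study: 86 with CAD vs. 76 without.
print(cad_aggressiveness(86, 76))   # 10
```

A higher subtlety score means fewer readers caught the lesion unaided, so a value near 1 marks the lesions where CAD has the most potential to help.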
The overall sensitivity was 55 percent without CAD and 61 percent with CAD, with CAD producing 0.6 false detections per image. Radiologists recalled an average of 76 women without CAD and 86 with CAD, a statistically significant difference.
“On average, each radiologist found 3.75 additional malignant lesions when using CAD, but they ignored an average of 9.25 (71 percent) of the correctly flagged malignant lesions that they had overlooked,” wrote Nishikawa and colleagues.
The researchers observed that less-experienced radiologists tended to rely on CAD more often, received more help from CAD and ignored fewer correct prompts.
Nishikawa et al pointed out that CAD is effective because it improves radiologists’ vigilance during image review. However, they added that CAD’s high false detection rate, combined with the low frequency with which it catches a cancer the radiologist missed, can make it difficult for radiologists to gain confidence in the system, which can undermine its effectiveness. “If radiologists recognize only one third of the true-positive computer detections, it may take several thousand screening mammograms before the radiologist finds that CAD is useful,” they wrote.
The authors offered several mechanisms for improving on the 9.9 percent increase in sensitivity observed with CAD.
They explained that computer-aided diagnosis programs are in the development pipeline and could be used to augment CAD, “either to emphasize detected lesions that are likely to be malignant, reducing the chances that a radiologist will ignore the prompt, or to not prompt on lesions that are highly likely to be benign, or at least to mark them as such.”
Also on the development front are content-based image retrieval systems, which leverage reference library images as a comparative knowledge base for radiologists.
There were several differences between the study and clinical practice, according to the researchers: a higher prevalence of malignancy, potential differences between the eight readers and general mammographers, and a dataset composed entirely of clinically missed cancers.