Breast radiologists showed slightly higher diagnostic performance when using artificial intelligence (AI), with no additional reading time required, according to a study published online Nov. 20 in Radiology.
Researchers led by Alejandro Rodriguez-Ruiz, from the department of radiology and nuclear medicine at Radboud University Medical Center in the Netherlands, found that breast cancer detection improved across all breast density categories, independent of lesion type and vendor image quality, when radiologists used AI support systems, and, most notably, that the support did not lengthen reading time.
The study included digital mammograms from 240 women (average age, 62 years) performed between 2013 and 2017 in the U.S. and Europe. Of the 240 examinations, 100 demonstrated malignant findings, 40 had false-positive findings and 100 were normal.
All examinations were interpreted by 14 board-certified radiologists—once with and once without the use of AI—who then provided a Breast Imaging Reporting and Data System score and probability of malignancy for each examination.
“AI support provided radiologists with interactive decision support (clicking on a breast region yields a local cancer likelihood score), traditional lesion markers for computer-detected abnormalities, and an examination-based cancer likelihood score,” Rodriguez-Ruiz et al. wrote. “The area under the receiver operating characteristic curve (AUC), specificity and sensitivity, and reading time were compared between conditions by using mixed-models analysis of variance and generalized linear models for multiple repeated measurements.”
The radiologists slightly improved their detection performance when using AI support, with the average AUC increasing from 0.87 to 0.89, though overall performance was "very similar," according to the researchers.
Sensitivity and specificity also increased with AI support, sensitivity rising from 83 percent to 86 percent and specificity from 77 percent to 79 percent. Meanwhile, reading time per examination was similar with and without AI support (unaided, 146 seconds; AI-supported, 149 seconds), according to the researchers.
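For readers less familiar with these metrics, sensitivity and specificity are simple ratios over a confusion matrix. The sketch below is purely illustrative: the counts are hypothetical, chosen only to roughly echo the cohort size and percentages reported above, and are not taken from the study's data.

```python
# Illustrative sensitivity/specificity calculation.
# Counts are hypothetical, not from the published study.

def sensitivity(tp, fn):
    """Fraction of cancers correctly flagged: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of non-cancers correctly cleared: TN / (TN + FP)."""
    return tn / (tn + fp)

# Example: 86 of 100 malignant exams flagged,
# 111 of 140 non-malignant exams correctly cleared.
print(f"Sensitivity: {sensitivity(86, 14):.0%}")
print(f"Specificity: {specificity(111, 29):.0%}")
```

With these made-up counts, sensitivity works out to 86 percent and specificity to roughly 79 percent, in line with the AI-supported figures quoted above.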
In an accompanying editorial, Manisha Bahl, MD, a radiologist at Massachusetts General Hospital in Boston, noted that the lack of difference between the stand-alone performance of the AI system and the radiologists’ average performance was the most striking finding.
Furthermore, it may suggest that integrating AI systems into routine clinical practice could help train radiologists and help them achieve performance benchmarks to improve the quality of screening mammography, according to Bahl.
“The study by Rodríguez-Ruiz and colleagues suggests that AI algorithms are reaching a performance level that is comparable to that of radiologists with regard to cancer detection rates at screening mammography,” Bahl wrote. “The increased recognition that mammograms contain more information than is appreciated by the human eye, coupled with the adaptive learning of neural networks, offers incredible promise for accurate and robust AI-based decision support tools for mammographic screening.”