Machine learning can reduce a radiologist's workload by lowering the number of screening mammograms they're required to read while preserving accuracy, according to results of a feasibility study published in the Journal of the American College of Radiology.
Trent Kyono, of the Department of Computer Science at the University of California, Los Angeles, and colleagues created their autonomous radiologist assistant (AURA), a modified version of a previous clinical decision support system, to determine whether it could classify mammograms as negative while maintaining diagnostic accuracy and flagging which scans would still need to be read by a radiologist.
“Unlike related works using convolutional neural networks (CNNs) for mammography that attempt to completely automate and replace the radiologist, we explore a more conservative approach,” the researchers wrote.
The study included a dataset of more than 7,000 women (aged 47 to 73) who were recalled for assessment at six U.K. National Health Service Breast Screening Program centers.
A CNN trained with multitask learning extracted imaging features from the mammograms, simulating a radiologist's assessment alongside nonimaging features and pathologic outcomes. Then, using multiple views, a deep neural network predicted both a diagnosis and a recommendation on whether a radiologist needed to read the scan.
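The paper's actual network is not reproduced here, but the two-output design described above can be sketched in miniature: a shared feature representation feeds one head that scores the diagnosis and a second head that decides whether the case should be deferred to a radiologist. Everything below (class name, layer sizes, untrained random weights) is an illustrative assumption, not AURA's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoHeadedTriageNet:
    """Toy sketch of a multitask predictor: one shared representation,
    two heads (diagnosis, and 'does a radiologist need to read this?')."""

    def __init__(self, n_features, n_hidden=16):
        # Random, untrained weights: this only illustrates the data flow,
        # not a fitted model.
        self.w_shared = rng.normal(0.0, 0.1, (n_features, n_hidden))
        self.w_diag = rng.normal(0.0, 0.1, n_hidden)
        self.w_read = rng.normal(0.0, 0.1, n_hidden)

    def predict(self, x, read_threshold=0.5):
        h = np.tanh(x @ self.w_shared)       # shared representation
        p_cancer = sigmoid(h @ self.w_diag)  # head 1: diagnosis score
        p_read = sigmoid(h @ self.w_read)    # head 2: defer to radiologist?
        return p_cancer, bool(p_read >= read_threshold)
```

In a triage system of this shape, only cases whose second head fires get routed to a human reader; the rest are reported as negative automatically.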
AURA decreased the number of scans needing review by 34% in a theoretical diagnostic setting (defined as 15% cancer prevalence) and by almost 91% at a low prevalence of 1%, a rate common in typical screening institutions.
Kyono and colleagues wrote that their system was able to reduce required reads by classifying younger patients and those with lower breast density—known attributes of lower cancer likelihood—as breast cancer negative.
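AURA learns its triage behavior from data, but the effect the authors describe, clearing younger, lower-density patients so fewer scans reach the reader, can be illustrated with a deliberately simple hand-written rule. The cutoffs and case data below are hypothetical, chosen only to show how triaging low-risk cases translates directly into workload reduction.

```python
def triage_low_risk(age, density_category, age_cutoff=50, density_cutoff=2):
    """Illustrative rule (not AURA's learned model): treat a case as
    low risk, and thus skippable, when the patient is younger than the
    cutoff and has lower breast density (category on a 1-4 scale)."""
    return age < age_cutoff and density_category <= density_cutoff

# Hypothetical (age, density) cases for four screening exams.
cases = [(45, 1), (62, 3), (48, 2), (55, 1)]

skipped = sum(triage_low_risk(age, density) for age, density in cases)
workload_reduction = skipped / len(cases)  # fraction a radiologist skips
```

The same counting argument explains why the reported savings depend on prevalence: the lower the cancer rate in the screened population, the larger the pool of genuinely negative scans a triage system can safely remove from the reading queue.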
“AURA is distinct from previous machine learning approaches in that it does not seek to replace human intervention, but rather assist radiologists by correctly classifying mammograms from patients with low risk of breast cancer, so that fewer mammograms need to be read by the radiologist,” Kyono et al. wrote. “Indeed, such a hybrid approach overcomes the limitations of previous approaches that have sought to completely replace human expertise but despite early promise have not to date achieved the high specificity needed in this regard to consider their widespread use in screening or diagnostic facilities.”
The researchers cited a number of limitations, including the fact that the overall impact of AURA cannot be completely understood because it was not tested empirically.
Going forward, their method must be validated on larger, population-based imaging datasets before it can be used clinically.
“AURA opens the door for realistic synergistic relationships between radiologist and machine with benefits that surpass those reported in the existing literature and provides methods for artificial intelligence integration that can be integrated into clinical practice in the near term,” the team concluded.