Stanford researchers have developed a deep-learning neural network model that can determine the bone age of children from a hand radiograph about as accurately as both an expert radiologist and an existing feature-extraction software package that has been cleared for clinical use in Europe.
Reporting their work in Radiology, the study authors suggest their model might be applicable to other fairly simple image-interpretation tasks across radiology.
Lead author David Larson, MD, MBA, senior author Curtis Langlotz, MD, PhD, and colleagues trained and validated the model with more than 14,000 hand x-rays, along with the corresponding radiology reports, from two children’s hospitals.
They tested the model against expert radiologists and the European software using two measures.
The first used bone age estimates from 200 clinical radiology reports, together with those of three additional expert readers, as the reference standard.
The second used around 1,400 exams from the publicly available Digital Hand Atlas, which the researchers compared with published results obtained with the European software.
They found the mean difference between the bone age estimates of the model and those of the reviewers was 0 years, with a root-mean-square difference of 0.63 years and a mean absolute difference of 0.50 years.
Further, the estimates of the model, the clinical report, and the three reviewers all fell within the 95 percent limits of agreement.
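For readers unfamiliar with these agreement statistics, the sketch below shows how mean difference (bias), root-mean-square difference, mean absolute difference, and Bland-Altman 95 percent limits of agreement are computed from paired estimates. The numbers are invented for illustration only; they are not the study's data.

```python
import math

# Hypothetical paired bone-age estimates in years (illustrative, NOT study data)
model = [10.2, 7.8, 12.5, 5.1, 9.0]
reviewer = [10.0, 8.1, 12.0, 5.5, 9.3]

diffs = [m - r for m, r in zip(model, reviewer)]
n = len(diffs)

mean_diff = sum(diffs) / n                             # bias between methods
rms_diff = math.sqrt(sum(d * d for d in diffs) / n)    # root-mean-square difference
mad = sum(abs(d) for d in diffs) / n                   # mean absolute difference

# Bland-Altman 95% limits of agreement: bias +/- 1.96 x SD of the differences
sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
loa = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

print(f"mean diff = {mean_diff:.2f} y, RMS = {rms_diff:.2f} y, MAD = {mad:.2f} y")
print(f"95% limits of agreement: {loa[0]:.2f} to {loa[1]:.2f} y")
```

In the study's terms, a mean difference of 0 years means the model showed no systematic bias relative to the reviewers, while the RMS and MAD figures capture the typical size of individual disagreements.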
“Our results suggest potential broad applicability of deep-learning models for a variety of diagnostic imaging tasks without requiring specialized subject matter knowledge or image-specific software engineering,” Larson and colleagues comment. “Specifically, machine learning models developed for other vision tasks … may also be generalized to tasks in the medical domain.”
Qualifying their results, the authors stress that automated assessment of bone age probably ranks among the easiest applications for deep learning in medical imaging.
“Although our results are encouraging for application of deep learning in medical images, they do not necessarily indicate how successful such applications will be when applied to more complex and nuanced imaging tasks,” they write.
Nevertheless, at a time when all eyes are on the next big thing in artificial intelligence, not least in radiology, the gist of their conclusion warrants attention:
“A deep learning-based automated software application with accuracy similar to that of a radiologist could be made available for clinical use.”