AI pre-trained on popular ImageNet dataset falls short on x-ray interpretation tasks

Pre-training artificial intelligence software on ImageNet, a popular natural-image dataset, does not necessarily translate to better performance on medical imaging tasks, despite long-held assumptions.

As it stands today, many chest x-ray AI platforms start from models pre-trained on ImageNet, a large dataset of everyday natural images rather than medical scans. Such transfer learning approaches have assumed that ImageNet pre-training leads to better chest x-ray interpretation and overall model performance.
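For readers curious what that setup looks like in practice, below is a minimal sketch of ImageNet transfer learning for chest x-rays, assuming PyTorch and torchvision with a DenseNet121 backbone (the architecture behind CheXNet); the 14-label head matches CheXpert's label set, while the loss and learning rate are illustrative choices, not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 14  # CheXpert labels 14 observations per radiograph

# Start from ImageNet-pretrained weights -- the assumption the study tests.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Swap the 1000-class ImageNet head for a multi-label chest x-ray head.
model.classifier = nn.Linear(model.classifier.in_features, NUM_LABELS)

# Multi-label targets call for sigmoid + binary cross-entropy, not softmax.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative value
```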

A new paper published Jan. 18 on the open-access preprint server arXiv.org, however, found that this is not the case.

Co-author Pranav Rajpurkar, a PhD student at Stanford, and colleagues compared the transfer learning performance and parameter efficiency of 16 popular convolutional architectures on CheXpert, a dataset of some 224,000 chest x-rays developed by the same Stanford group that created CheXNet.
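The parameter-efficiency side of such a comparison comes down to counting weights per architecture; the sketch below does so for four torchvision models standing in for the paper's full list of 16.

```python
from torchvision import models

# A few stand-ins for the 16 convolutional architectures compared in the paper.
architectures = {
    "densenet121": models.densenet121,
    "resnet50": models.resnet50,
    "mobilenet_v2": models.mobilenet_v2,
    "efficientnet_b0": models.efficientnet_b0,
}

for name, build in architectures.items():
    net = build(weights=None)  # architecture only; pretraining is the variable under test
    n_params = sum(p.numel() for p in net.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```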

They found that newer architectures created through architecture search on ImageNet may be overfit to that dataset, and that ImageNet “may not be an appropriate benchmark for selecting architecture for medical imaging tasks.”

The findings were surprising, Rajpurkar acknowledged on Twitter.

“Our study, to the best of our knowledge, contributes the first systematic investigation of the performance and efficiency of ImageNet architectures and weights for chest x-ray interpretation,” Rajpurkar and colleagues wrote. “Our investigation and findings may be further validated on other datasets and medical imaging tasks.”

There’s much more to glean from the study, which can be accessed for free on arXiv.org.

""

Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
