Automated tool labels 120K brain scans in under 30 minutes, holding ‘enormous’ potential for AI

Imaging experts out of London have developed a new technique to automatically label brain MRI scans that they say has massive implications for artificial intelligence and patient care.

Robust AI tools require tens of thousands of annotated images to achieve top performance, a major constraint hindering widespread adoption.

But this deep learning platform assigned labels to more than 120,000 head scans in under 30 minutes—a task that would typically take years to finish manually, authors explained Thursday in European Radiology.

“By overcoming this bottleneck, we have massively facilitated future deep learning image recognition tasks, and this will almost certainly accelerate the arrival into the clinic of automated brain MRI readers,” senior author Tom Booth, of the King’s College London School of Biomedical Engineering & Imaging Sciences, said July 22. “The potential for patient benefit through, ultimately, timely diagnosis, is enormous.”

The platform is based on more than 126,000 head MRI exams performed at the King’s College Hospital NHS Foundation Trust between 2008 and 2019. Booth et al. also utilized data from corresponding reports written by 17 neuroradiologists. Reports and data from an outside institution were also included to boost performance.

To ensure its accuracy, the team validated the model by comparing predicted labels against reference-standard report and image labels, and also recorded area under the receiver operating characteristic curve (AUROC) scores.
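For context, the AUROC score used here measures how well a model’s predicted probabilities rank positive cases above negative ones (1.0 is perfect ranking, 0.5 is chance). A minimal illustrative sketch in Python, not the study’s own code, computes it via pairwise comparison:

```python
def auroc(labels, scores):
    """Area under the ROC curve via pairwise comparison (Mann-Whitney U).

    labels: iterable of 0/1 reference labels (e.g., labels derived from
            radiology reports)
    scores: iterable of model-predicted probabilities for the positive class
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    # Count how often a positive case is scored above a negative one,
    # with ties counting as half a win.
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))


# Example: three of four positive/negative pairs are ranked correctly.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

In practice, library implementations such as scikit-learn’s `roc_auc_score` compute the same quantity more efficiently via sorting.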

The tests yielded positive results, with a slight drop in performance for classifying atrophy, encephalomalacia, and vascular characteristics.

While the hard part may be completed, there are still additional challenges ahead, including performing image recognition tasks and ensuring models can work across various settings and scanners, the authors explained.

But they did make their code and models freely available to others to “ensure that as many people benefit from this work as possible.”

Read the full breakdown here.

Around the web

The difference when using speech-recognition software may be accentuated in certain groups of radiologists, UNC researchers detailed in the Journal of Digital Imaging.

Researchers have used machine learning to track diabetes at the population level.

AI developers have worked with experts in human-computer interaction to design an EHR that shows clinicians all information pertinent to the patient case they’re working on—and only that info.
