Asking these 4 questions can help molecular imaging experts spot breakthrough AI tools

Artificial intelligence papers have flooded medical imaging journals in recent years. For all their promise, AI tools must be held to the highest standards if providers are to translate published findings into real-world results.

Recently, two French imaging oncology experts proposed a checklist to help nuclear medicine specialists identify robust AI-based evidence in their field.

As such submissions and publications continue to grow, four simple questions can help separate empty promises from true advances, the pair explained March 26 in the Journal of Nuclear Medicine.

“To facilitate the identification of those contributions that might be groundbreaking, we encourage the authors and reviewers of AI‐based manuscripts to carefully consider a simple checklist (the T.R.U.E. checklist) composed of four questions: Is it true? Is it reproducible? Is it useful? Is it explainable?” Irene Buvat, PhD, and Fanny Orlhac, both with the Laboratory of Translational Imaging in Oncology at the University of Paris Saclay, wrote.

1. Is it true? Many AI-based imaging studies are biased by unrepresentative training populations, data leakage or overfitting, the authors wrote. Readers should assume such biases are present “by default” and look for confounding factors. Control experiments help demonstrate that findings are valid and should be performed whenever possible (see the brief sketch after this list).

2. Is it reproducible? Despite growing efforts to increase transparency and to share data and models, many tools remain too complex to reproduce. Buvat and Orlhac “strongly encourage” authors to describe their methods carefully and to provide the data or code needed to test their algorithms. Enlisting experts to validate methods and check shared materials can help ensure the greatest impact.

3. Is it useful? Researchers should ask in what ways, if any, new findings improve on existing methods. Sharing datasets can help create benchmarks for fair comparisons, the authors noted. Performance analyses should also weigh the trade-offs among complexity, accuracy and robustness. In any case, the added value of AI should always be well supported.

4. Is it explainable? Efforts to solve AI’s “black box” problem are ongoing, yet users may never fully understand many tools because of their sophistication. In some cases, such as explaining why some patients respond to immunotherapy while others do not, explainability is needed to advance human understanding. It is a difficult question to answer, but one that should be addressed whenever possible.
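For readers who build such models themselves, the short sketch below illustrates one practical reading of question 1. It is not taken from Buvat and Orlhac's checklist; it simply assumes a hypothetical radiomics-style feature table and uses scikit-learn's GroupShuffleSplit so that no patient contributes data to both the training and test sets, one common route for data leakage. Because the labels here are pure noise, a held-out AUC near 0.5 also acts as the kind of control experiment the authors recommend.

```python
# Minimal, hypothetical sketch (not from the JNM checklist itself): a patient-level
# train/test split that guards against the data leakage described in question 1.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Fabricated example data: 200 lesions from 80 patients, 10 imaging features each.
n_lesions, n_patients, n_features = 200, 80, 10
X = rng.normal(size=(n_lesions, n_features))               # imaging features
y = rng.integers(0, 2, size=n_lesions)                     # outcome label (random noise)
patient_ids = rng.integers(0, n_patients, size=n_lesions)  # patient each lesion belongs to

# Group-aware split: every patient's lesions end up entirely in train OR test.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
auc = roc_auc_score(y[test_idx], model.predict_proba(X[test_idx])[:, 1])
print(f"Patient-level held-out AUC: {auc:.2f}")  # ~0.5, as expected for random labels
```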

Read the full checklist in the Journal of Nuclear Medicine.

""

Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
