The American College of Radiology (ACR) recently announced a partnership with the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society to develop artificial intelligence (AI) algorithms focused on radiologists’ clinical needs.
Bibb Allen Jr., MD, chief medical officer of ACR’s Data Science Institute (DSI), talked with HealthImaging about focusing MICCAI's AI challenges on radiologists' clinical needs and reshaping how algorithms are created, verified and implemented.
HealthImaging: What was the thinking behind this partnership?
Bibb Allen Jr.: Being involved with MICCAI gives us an opportunity to affiliate with developer groups. The collaboration came about when their leadership mentioned that many of their developers were interested in getting involved with healthcare AI—imaging in particular.
MICCAI typically puts on a number of AI challenges at their annual meeting, but many do not have an immediate and direct application to healthcare. We are going to work with them on building some of these challenges around AI use cases that will be applicable to radiologists.
The idea is that if challenges are built on clinical use cases, they will provide proof of concept, and the winners can advance their products toward commercialization: clinical integration, pre-market review with support from ACR and the FDA, and monitoring of the algorithm in broad clinical practice.
In a recent JACR editorial, you wrote many developers are working with single radiologists at individual institutions to create specific AI algorithms. Why is that not the best method? How would you like it to change?
If a developer works with a single institution, what we see happening is that they’re using that institution’s clinical data—which is typically acquired with the same protocols and scanners. The algorithm is trained on that data and verified on cases held back from the same data. Implementation is designed to meet a specific need of the institution, which may work perfectly there, but will it generalize to the rest of the country or the world?
We think creating distributed ways to train algorithms, using datasets from multiple institutions, and building a certification process on multi-institutional data is a better way to ensure algorithms will be safe and effective in clinical practice.
Developing an algorithm for the specific needs of an institution and its patients seems like a good approach. Why is a broader approach better?
It depends, right? I think locally, institutions may want to develop their own algorithms to solve their own needs. A health system may want a business-analytics algorithm to do certain things for the system. But when the clinical problem is early detection of stroke or classification of lung nodules, these are issues that matter to radiologists all over the country. Developing structured ways to address them will be far superior to having a bunch of one-offs.
In the JACR article, you mentioned prioritizing use cases as a challenge for AI. Can you expand on that and how it will be addressed?
Developers are out there building use cases, and what they learn is shaped by individual health systems or academic departments. The priority there is what is needed in Birmingham or at Johns Hopkins, not the needs of the broader community. By developing data science panels populated by radiologists, we can define use cases through a much broader lens.
For instance, there was a challenge a year ago around lung cancer detection. The idea was to determine who had cancer and who didn’t. The winners produced an algorithm that provided information such as ‘this person has an 85 percent chance of lung cancer, and this nodule has a 15 percent chance of being cancer.’ Output like that isn’t very useful to a radiologist, because in both scenarios we’d probably recommend a biopsy anyway. We think having radiologists design use cases will be more effective at helping developers get their products to market, which is ultimately what they’re trying to do.