AI’s ‘extra set of eyes’ helps radiologists identify missed intracranial hemorrhage cases

Many experts envision a future in which AI augments radiologists rather than replacing them completely. And a new study gave credence to that notion, demonstrating how the technology can serve as an effective peer-review tool.

Yale Medicine researchers determined that an FDA-approved AI solution could help radiologists diagnose intracranial hemorrhage (ICH) on CT scans and lower their overall error rate. The findings, published Feb. 24 in Academic Radiology, suggest AI applications may have a future as quality-assurance tools that improve patient care overall.

“Our study suggests that an AI solution can successfully be used as an adjunct to the current peer-review process by analyzing images and serving as a real-time second reader for noncontrast head CT scans in detecting ICH, and can thus function as an ‘extra set of eyes’ providing real-time, prospective peer review for radiologists,” said Balaji Rao, MD, with Yale’s Department of Radiology and Biomedical Imaging, and colleagues.

Intracranial hemorrhage is the second leading cause of stroke across the globe. Advances in imaging have allowed radiologists to detect more subtle cases of ICH, but they have also increased the number of images radiologists must examine to do so. That added workload, Rao et al. noted, raises the chance that a physician misses a finding, potentially compromising patient care.

With this in mind, the investigators retrospectively applied a commercially available convolutional neural network to noncontrast head CT scans performed across eight of Yale’s affiliated imaging sites. In total, 101 radiologists interpreted more than 6,500 scans, 90% of which were read by attending radiologists. Of those CTs, 5,585 were deemed free of intracranial hemorrhage by their human readers.

The AI, however, flagged 28 of those scans as containing a brain bleed. To adjudicate the discrepancies between man and machine, three neuroradiologists reviewed the images and found that 16 exams did indeed contain ICH, meaning the interpreting radiologists had missed them.

Overall, radiologists missed only a small fraction of these bleeds (1.6%). Still, the team highlighted the “high value” of the AI tool, “with 57% of the AI tool-identified discrepant cases reflecting clinical ICH ‘misses’ with false-negative interpretations by the radiologist.”
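For readers who want to trace the arithmetic behind those percentages, here is a minimal sketch in Python. The scan counts are taken from the article; the total number of ICH-positive exams is back-calculated from the reported 1.6% miss rate and is an assumption for illustration, not a figure stated in the study.

```python
# Worked arithmetic behind the article's reported percentages.
# Counts come from the article; the implied ICH total is back-calculated,
# not stated in the study (an assumption for illustration only).

negative_reads = 5585   # CTs read as ICH-negative by radiologists
ai_flagged     = 28     # of those, flagged by the AI as containing a bleed
confirmed_ich  = 16     # confirmed as true ICH by three neuroradiologists

# Share of AI-flagged discrepant cases that were genuine radiologist misses
true_miss_share = confirmed_ich / ai_flagged
print(f"True misses among AI-flagged discrepancies: {true_miss_share:.0%}")  # ~57%

# The article reports a 1.6% overall miss rate; back-calculating from the
# 16 confirmed misses implies roughly how many true ICH cases the cohort held.
reported_miss_rate = 0.016
implied_ich_total = confirmed_ich / reported_miss_rate
print(f"Implied ICH-positive exams in cohort: ~{implied_ich_total:.0f}")     # ~1,000
```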

Rao and colleagues pointed out that their study was not designed to evaluate the tool’s accuracy, making it difficult to gauge the AI’s potential clinical impact. They also noted that nearly one-third of the studies flagged for a bleed by the technology were false positives caused by calcifications or image artifacts, though these were “easily” corrected by the radiologists.

“Given the potential impact diagnostic errors can have on patient outcomes, new AI tools and technology that can assist radiologists may be of great value as clinicians strive to continuously decrease error rates and improve patient care,” the authors concluded.

""
