3 reasons radiologists shouldn’t sweat deep learning

The human-level success of deep learning has led some in medicine to question whether automation may eventually take over many tasks performed by radiologists. One radiologist put that question to bed in an April 18 editorial published in the Journal of the American College of Radiology.

“With this in mind, it is clear why casual observers believe that our days are numbered: the computer can do everything we do,” wrote Alex Bratt, MD, of the department of radiology at Stanford University School of Medicine. “This happens to be incorrect, but the reasons why are not immediately obvious.”

Bratt went on to make those reasons obvious, providing three examples of why radiologists need not fear deep neural networks (DNNs).

Deep learning has limited input capacity

Unlike humans, who can integrate clinical notes, lab values, prior imaging and more, neural networks are limited by the size and shape of the inputs they can accept, and most operate on single 2D images, Bratt noted.
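
To see the constraint concretely, consider a minimal sketch in Python (assuming PyTorch; the toy model and the 64 x 64 input size are invented for illustration, not drawn from the editorial). A typical image classifier bakes the size and shape of its input in at construction time, leaving no slot for notes, labs or prior studies:

```python
import torch
import torch.nn as nn

class FixedInputClassifier(nn.Module):
    """Toy classifier hard-wired to single 64x64 grayscale images."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        # The flattened feature size is fixed when the model is built.
        self.fc = nn.Linear(8 * 64 * 64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Expects exactly (batch, 1, 64, 64): one 2D image per case.
        return self.fc(torch.relu(self.conv(x)).flatten(1))

model = FixedInputClassifier()
ok = torch.randn(1, 1, 64, 64)      # the one shape the model accepts
print(model(ok).shape)              # torch.Size([1, 2])

bad = torch.randn(1, 1, 256, 256)   # a prior study at another matrix size
try:
    model(bad)
except RuntimeError as err:
    print("shape mismatch:", err)   # the model cannot even ingest it
```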

The root of the problem lies in long-term dependencies. Humans, Bratt wrote, can read a novel and recognize a character introduced in chapter 1 even if that character does not appear again until chapter 10, building a cohesive model from a large block of text.

Radiology demands the same kind of integration across large swaths of text and pixels, and DNNs haven’t come close to the human-level performance exemplified in Bratt's example.
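
A rough illustration of why, in plain Python (the window size and toy "novel" below are invented for the example): a model that reads its input through a fixed-size window never sees material that falls outside it, however relevant.

```python
# A model with a fixed context window only ever sees the last N tokens.
CONTEXT_WINDOW = 512  # assumed window size, chosen for illustration

novel = (["chapter 1: meet the stranger"]
         + ["filler"] * 10_000
         + ["chapter 10: the stranger returns"])

visible = novel[-CONTEXT_WINDOW:]  # only the most recent tokens survive
print("chapter 1: meet the stranger" in visible)  # False: chapter 1 is gone
```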

“Ask yourself what fraction of the work you do could be performed safely with access to only a single image or a small contiguous set of images without access to clinical information or prior examinations. I’m guessing it’s pretty small,” Bratt wrote.

DNNs are 'brittle'

Networks can readily be trained to classify such things as street signs, but even a slight change, such as a piece of tape on top of a sign, can cause a classification failure.

In radiology, applying a model trained at one institution to images from another has proven to be a challenge, typically resulting in a drop in performance.

“This again reflects the fact that ostensibly trivial, even imperceptible, changes in input can cause catastrophic failure of DNNs, which limits the viability of these models in real-world mission-critical settings such as clinical medicine,” Bratt argued.
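
The street-sign failure has a standard laboratory analogue: the fast gradient sign method (FGSM), which nudges every pixel by an imperceptibly small amount in whichever direction most increases the model's loss. A minimal sketch, assuming PyTorch and a throwaway untrained model (neither comes from the editorial):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# Throwaway stand-in classifier; any differentiable model works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))

def fgsm_perturb(model, image, label, eps=0.005):
    """Shift each pixel by +/-eps in the direction that most
    increases the loss (the fast gradient sign method)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()

x = torch.rand(1, 1, 64, 64)                # a "clean" image
y = model(x).argmax(dim=1)                  # the model's original call
x_adv = fgsm_perturb(model, x, y)
print("max pixel change:", (x_adv - x).abs().max().item())  # <= eps
print("prediction changed:", bool(model(x_adv).argmax(dim=1) != y))
```

On a trained network, this epsilon-bounded nudge is routinely enough to flip the output class, which is exactly the imperceptible-change, catastrophic-failure pattern Bratt describes.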

They need large amounts of data

DNNs need large amounts of training data, sometimes millions of images. Humans, unlike algorithms, can fall back on abstract reasoning rather than pure visual pattern recognition, and so can learn from far fewer examples.

This may especially come into play for rare diseases, Bratt wrote.

“For example, an emergency department radiologist may never have seen a case of left upper quadrant appendicitis in a patient with gut malrotation, but she would likely find this fairly trivial to recognize,” according to Bratt. “Although these types of cases are relatively straightforward for humans, there may be simply too few to train DNNs with sufficient performance.”

“To be clear, I welcome general artificial intelligence with open arms, because it will generate unprecedented prosperity for the human race just as automation has for centuries,” Bratt concluded. “As radiologists, it behooves us to educate ourselves so that we can cut through the hype and harness the very real power of deep learning as it exists today, even with its substantial limitations.”

""

