AI imaging software highly vulnerable to cyberattacks

Artificial intelligence (AI) systems designed to classify medical images are vulnerable to outside attacks that may be nearly imperceptible to those who created the algorithms, according to a recent article in IEEE Spectrum.

The article is based on a May 21 study that probed the limits of deep learning by feeding medical imaging systems adversarial examples, inputs subtly altered to make AI algorithms misclassify them. The authors tested three models: one that detects diabetic retinopathy from retinal images, one that detects pneumothorax from chest x-rays and one that detects melanoma from skin images.

The authors found their attacks could fool the deep learning systems into misclassifying images up to 100 percent of the time, and the doctored images were undetectable by human observers.
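Attacks of this kind typically work by nudging each pixel in the direction that most increases the model's error, keeping the total change too small for a person to notice. As a rough illustration of the general idea, and not the study authors' exact method, below is a minimal sketch of the fast gradient sign method (FGSM), one standard gradient-based attack, written in PyTorch; the model, image and label here are hypothetical placeholders.

```python
import torch

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial example with the fast gradient sign method.

    Shifts every pixel of `image` by at most `epsilon` in the direction
    that increases the model's loss: a change that is typically invisible
    to a human viewer but can flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Because the whole procedure is a single forward and backward pass, it can be scripted and run over an entire image archive, which is consistent with Finlayson's point below about how easily such attacks can be automated.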

“The most striking thing to me as a researcher crafting these attacks was probably how easy they were to carry out,” study lead author Samuel Finlayson, a computer scientist and biomedical informatician at Harvard Medical School in Boston, told IEEE Spectrum. “This was in practice a relatively simple process that could easily be automated.”

According to the article, computer scientists are working to build AI models that are both accurate and secure, but have yet to succeed. Finlayson also told IEEE Spectrum that taking basic measures to secure medical infrastructure would be a simple step in the right direction.


