The data debate: Patient and vendor perspectives on ethical AI in radiology

Data security has become a serious issue in the U.S., not only for big tech companies like Facebook, but for vendors and institutions looking to use patient imaging information to develop AI platforms.

Last week during SIIM’s annual conference, multiple experts dove into the ethics of AI development, debating whether vendors have a right to buy patient data to create algorithms or whether patients should be the gatekeepers of their own imaging data.

Adam B. Prater, MD, MPH, assistant professor of radiology and imaging sciences at Emory University School of Medicine in Atlanta, argued from a pro-business standpoint, saying healthcare needs vendors if radiology is to realize AI’s true potential.

“Healthcare needs vendors to make progress,” he said. “Every other industry has artificial intelligence; we’re super behind and we need vendors for progress. Healthcare can’t solve everything by itself.”

A main concern is de-identifying data, which is harder than it seems, Prater added. As of now, institutions don’t need patient consent to sell data to third parties; the data just needs to be de-identified.

But can vendors be trusted with such a valuable resource? For example, CT scan data of a patient’s head can be reconstructed into a 3D image; if someone got hold of that scan, it could identify the patient and be sold off or used maliciously. Prater called this concern “overblown,” arguing that if vendors follow HIPAA, and perhaps add stronger data use agreements for improved protections, they should be able to purchase data on an open market.
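To give a sense of why de-identification is trickier than it sounds, here is a minimal sketch using the open-source pydicom library (the file name scan.dcm and the exact tag list are illustrative assumptions, not from the SIIM session). Blanking the standard header tags is the easy part; note that the pixel data itself, the very thing a 3D facial reconstruction uses, is left untouched.

```python
import pydicom

# Minimal de-identification sketch: blank common PHI header tags.
# "scan.dcm" is a hypothetical input file.
ds = pydicom.dcmread("scan.dcm")

for tag in ["PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "ReferringPhysicianName"]:
    if tag in ds:          # only clear tags that are present
        setattr(ds, tag, "")

ds.remove_private_tags()   # vendor-specific tags can also hide PHI

# PixelData is untouched: a head CT can still be rendered as a
# recognizable face, which is exactly the re-identification risk
# Prater described as "overblown."
ds.save_as("scan_deid.dcm")
```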

“We’re (vendors) not out to harm people, we’re actually trying to make AI good and save lives, so this is really about patients,” Prater told the audience.

Following Prater was Patricia Balthazar, MD, a radiology resident at Emory, who argued the patient should be put first when using data to create AI.

“We (clinicians) said that we should involve patients in the decision-making, and we have consents for every procedure,” Balthazar said. “If the patient is getting an imaging study, they should know that you’re (hospitals) selling the image to someone else.”

There are also two sides to AI creation, she said. In-house methods are traditionally carried out in an academic setting by physicians trained in healthcare-specific data privacy laws such as HIPAA. Third parties may not be as well versed in this area.

Additionally, the patient or insurance company actually pays for the scan, yet it is the healthcare institution that profits from selling the imaging data, a disconnect that puzzled Balthazar.

She acknowledged that healthcare doesn’t want to stifle AI’s advancement, so she suggested a few potential solutions to overcome the problems mentioned above.

In-depth data use agreements could be a good way to spell out exactly how a vendor or third party will use patient data. Additionally, federated learning, the approach Google uses to train or fine-tune an algorithm on data that stays on a user’s phone and is never sent back to the company, could be a good model for healthcare.
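For readers unfamiliar with that approach, below is a minimal sketch of federated averaging in Python; the three simulated hospital datasets, the linear model, and names like local_update are illustrative assumptions, not any vendor’s implementation. The key property is that each site trains on its own data and only model weights, never patient data, are aggregated.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# Simulate three hospitals, each holding its own private dataset.
true_w = np.array([1.5, -2.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    sites.append((X, y))

# Federated rounds: the server sends the global weights out,
# each site trains locally, and only the weights come back.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)  # aggregate weights, not data

print("learned:", global_w, "vs. true:", true_w)
```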

“As healthcare providers we also have ethical responsibilities, we have to follow the law, but we have to put patients first,” Balthazar said. “We all had to say first, do no harm. When we sell data to third parties things can get tricky and we don’t necessarily know what they’re going to do.”

All presenters acknowledged their arguments may not reflect their personal viewpoints; they took different positions for the purpose of the discussion.

""

Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
