Who sues whom when AI misleads medical diagnosticians?

At some point in the not-too-distant future, a patient is going to blame someone, perhaps a physician or even a radiologist, for an injurious care decision made, recommended or otherwise nudged by artificial intelligence (AI). Who will be slapped with the suit?

Taking up the riddle, a Quartz writer points out that the opaque algorithmic rationale that typically drives AI's decision-making, its so-called "black box," will make it hard to pin liability on any one human.

“Even if it were possible for a technically literate doctor to inspect the process, many AI algorithms are unavailable for review, as they are treated as protected proprietary information,” explains the writer, Robert Hart. “Further still, the data used to train the algorithms is often similarly protected or otherwise publicly unavailable for privacy reasons. This will likely be complicated further as doctors come to rely on AI more and more and it becomes less common to challenge an algorithm’s result.”

If healthcare is to make the most of the opportunities AI presents for improving care quality, Hart adds, “we need to know who will be responsible when something goes wrong.”

He briefly considers several possible targets. Read the full piece at Quartz.

Dave Pearson

Dave P. has worked in journalism, marketing and public relations for more than 30 years, frequently concentrating on hospitals, healthcare technology and Catholic communications. He has also specialized in fundraising communications, ghostwriting for CEOs of local, national and global charities, nonprofits and foundations.
