When it comes to radiologists using computer-aided diagnosis software, establishing an appropriate level of trust with the technology is vital, according to a review published in the February issue of Clinical Radiology.
Such systems use complex image-processing programs and artificial intelligence techniques to evaluate images and detect abnormalities, with the primary aim of assisting radiologists in medical image diagnosis, according to review authors Wiard Jorritsma, a PhD candidate at the University of Groningen in the Netherlands, and colleagues.
The authors drew a distinction between computer-aided detection (CADe) and computer-aided diagnosis (CADx). With CADe, radiologists perform an unaided reading of an image first and then review any marks made by the system. With CADx, a radiologist identifies suspicious structures on an image and the system evaluates them, deciding whether a structure is malignant or benign, estimating the likelihood of malignancy, or assigning a pathological classification.
Jorritsma and colleagues called the combination of a radiologist and a CAD system a diagnostic team, similar to a double reading with two radiologists, and cited various studies claiming that radiologists and CAD systems together can produce effective readings.
“However, the team performance of radiologist and CAD is lower than what might be expected based on the performance of the radiologist and the CAD system in isolation,” the authors wrote, citing an important factor in interactions between humans and artificial intelligence systems—trust.
The authors cite both under-trust and over-trust as potential problems in establishing an effective diagnostic team.
“Too little trust in a useful automated aid can lead to under-reliance, which means that the full potential of the aid is not being used,” Jorritsma and colleagues wrote. “Too much trust in an aid on the other hand can lead to over-reliance, meaning that the aid causes humans to make errors they would not have made without it.”
The authors presented four suggestions to improve trust calibration of the radiologist-CAD team.
The first suggestion was to offer a confidence rating for each decision made by the CAD system.
“Displaying a confidence rating for each mark might facilitate more appropriate trust, because it allows radiologists to adapt their trust in a specific mark to the CAD system's confidence in this mark,” the authors wrote.
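The idea of a per-mark confidence rating can be illustrated with a minimal sketch. The `CadMark` class, its fields, and the display format below are hypothetical illustrations, not details from the review; the point is only that each mark carries its own confidence value, which a reader could weigh individually.

```python
# Minimal sketch (hypothetical data structures): attaching a confidence
# rating to each CADe mark so a reader can weight marks individually.
from dataclasses import dataclass

@dataclass
class CadMark:
    x: int             # mark position on the image (pixels)
    y: int
    confidence: float  # system's confidence in this mark, 0.0-1.0

def format_marks(marks):
    """Render each mark with its confidence, highest first, so the
    radiologist can calibrate trust per mark rather than per system."""
    return [f"Mark at ({m.x}, {m.y}): confidence {m.confidence:.0%}"
            for m in sorted(marks, key=lambda m: -m.confidence)]

marks = [CadMark(120, 340, 0.92), CadMark(410, 88, 0.35)]
for line in format_marks(marks):
    print(line)  # prints one line per mark with its confidence percentage
```

In practice such a rating would come from the detection model itself; the sketch only shows the interface idea of exposing it alongside each mark.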
A second suggestion involves providing a global rationale: informing radiologists about the mechanisms by which CAD systems make decisions and the specific circumstances in which they are likely to make errors.
“This approach has the potential to increase trust in CAD, because the negative impact on trust of obvious errors in CAD might be reduced when radiologists understand the cause of these errors,” Jorritsma and team wrote.
The third suggestion was to supply a local rationale: the reasoning the CAD system used for each individual decision.
“Users generally prefer seeing the local rationales compared to no rationales, and providing these rationales has been shown to increase objective and subjective measures of trust,” the researchers wrote.
Finally, the authors suggested that radiologists be explicitly informed of the CAD system's past performance to improve trust calibration.
“This could allow radiologists to calibrate their trust for each specific type of lesion and might reduce both disuse and misuse by mitigating the effects of negative and positive CAD experiences for one lesion type on trust in CAD for other lesion types,” they wrote.
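One way to picture per-lesion-type trust calibration is a simple summary of the system's past accuracy broken down by lesion type. The function name, the record format, and the sample history below are illustrative assumptions, not taken from the review.

```python
# Hypothetical sketch: summarising a CAD system's past performance per
# lesion type, so trust can be calibrated for each type separately
# rather than generalised from one lesion type to another.
from collections import defaultdict

def performance_by_lesion_type(records):
    """records: list of (lesion_type, cad_was_correct) tuples.
    Returns {lesion_type: fraction of correct CAD calls}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for lesion_type, correct in records:
        totals[lesion_type] += 1
        hits[lesion_type] += int(correct)
    return {t: hits[t] / totals[t] for t in totals}

# Illustrative history: correct on 2 of 3 masses, 2 of 2 microcalcifications.
history = [("mass", True), ("mass", True), ("mass", False),
           ("microcalcification", True), ("microcalcification", True)]
print(performance_by_lesion_type(history))
```

A per-type breakdown like this is what would let a radiologist keep trusting the system on lesion types where it performs well, even after seeing it err on a different type.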
Providing radiologists with these sources of information could facilitate more appropriate trust in CAD systems, the authors concluded.
“However, all evidence to date is circumstantial. More research is needed to determine whether the suggested changes truly improve trust calibration and to determine the most effective way of presenting the information to the radiologists,” they wrote.