Algorithms show link between rads’ gaze, diagnostic decision and image content

Machine learning algorithms can model the interaction between radiologists’ gaze, diagnostic decision and image content, according to a study published in the October issue of the Journal of the American Medical Informatics Association.

Biomedical informatics innovation is driven by cognitive science, which helps in designing, developing and properly assessing medical information technologies. Two of the biggest challenges currently facing the field of radiology are diagnostic error and inconsistency in the interpretation of medical images. With the increasing volume and complexity of medical imaging data, radiologists are experiencing visual strain and cognitive fatigue, which in turn raise the risk of medical error.

Georgia Tourassi, PhD, of Oak Ridge National Laboratory in Tennessee, and colleagues recognized the need to understand individual differences in human perception and cognition of medical imaging data.

“They can provide important insights in the development and successful use of clinical decision support and education support information systems that meet the personal needs of clinicians involved in the interpretation of medical images,” wrote Tourassi and colleagues.

The researchers designed a study in which they examined machine learning for linking image content, human perception, cognition and error in diagnostic interpretation of mammograms. Gaze data and diagnostic decisions were collected from three breast imaging radiologists and three radiology residents who wore head-mounted eye trackers while reviewing 20 screening mammograms.

Image analysis was then performed on the mammographic regions that attracted the radiologists' attention and on all abnormal regions. The researchers used machine learning algorithms to develop predictive models that link image content with gaze; image content and gaze with cognition; and image content, gaze and cognition with diagnostic error.
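In practical terms, the modeling described above amounts to pairing feature vectors drawn from different sources — computed image features, gaze metrics and cognitive responses — and feeding them to a predictive model. A minimal sketch of that feature fusion step is shown below; all feature names and values are hypothetical illustrations, not taken from the study.

```python
# Illustrative feature fusion for one mammographic region a reader fixated on.
# Every feature name and value here is invented for illustration only.
image_features = {"contrast": 0.42, "texture_energy": 0.17, "region_area": 118.0}
gaze_metrics = {"dwell_time_ms": 950.0, "fixation_count": 4.0}
cognitive = {"reported_suspicion": 3.0}  # e.g., the reader's suspicion rating

def fuse(*feature_groups):
    """Concatenate named feature groups into one ordered feature vector."""
    names, values = [], []
    for group in feature_groups:
        for name, value in sorted(group.items()):
            names.append(name)
            values.append(value)
    return names, values

names, x = fuse(image_features, gaze_metrics, cognitive)
# x would serve as the input vector for a model predicting gaze,
# cognition, or diagnostic error for this region.
```

The fused vector `x` could then be handed to any standard classifier or regressor; the study's specific algorithms are not detailed in this article.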

The study’s findings indicated that machine learning produced highly accurate predictive models linking image content, gaze and cognition. The link with diagnostic error was supported only to some extent. Merging gaze metrics and cognitive features identified 59 percent of readers’ diagnostic errors and confirmed 97.3 percent of their correct diagnoses.
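The two reported figures correspond to the fraction of true errors a model flags and the fraction of correct diagnoses it leaves unflagged — essentially the sensitivity and specificity of an error-prediction model. As a rough illustration of how such figures are computed (using invented labels and predictions, not the study's data):

```python
# Hypothetical illustration: evaluating a model that flags reader errors.
# Labels: 1 = the reader made a diagnostic error, 0 = correct diagnosis.
# Predictions: 1 = the model flags the case as a likely error.
# These example arrays are invented, not drawn from the study.

def error_detection_metrics(labels, predictions):
    """Fraction of true errors flagged and of correct diagnoses confirmed."""
    errors = [p for y, p in zip(labels, predictions) if y == 1]
    correct = [p for y, p in zip(labels, predictions) if y == 0]
    errors_identified = sum(errors) / len(errors)
    correct_confirmed = sum(1 - p for p in correct) / len(correct)
    return errors_identified, correct_confirmed

labels = [1, 1, 0, 0, 0, 1, 0, 0]
predictions = [1, 0, 0, 0, 0, 1, 1, 0]
print(error_detection_metrics(labels, predictions))  # (0.666..., 0.8)
```

In the study's terms, the first quantity reached 59 percent and the second 97.3 percent when gaze and cognitive features were merged.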

Individual perceptual and cognitive behaviors were adequately predicted by modeling the behavior of others. In many situations, however, personalized tuning captured individual behavior more accurately.

“We believe that these findings encourage a paradigm shift in the way we think and develop computerized decision support systems and computerized education support systems for medical image interpretation,” wrote Tourassi and colleagues. “A personalized approach is a promising way to improve existing systems that are driven only by population-based understanding of the user community.”