Radiology reports’ ‘lexical characteristics’ can be mined for quality improvement

Your choice of words when conversing can give away your identity within a group—and the same goes for radiologists dictating reports.

In the latter setting, such distinctiveness of expression can help provide feedback, guide quality improvement and fine-tune radiology residency training.

So conclude radiologists James Scott, MD, and Edwin Palmer, MD, of Massachusetts General Hospital and Harvard Medical School, who explored how well automatic lexical analysis can combine with machine learning to objectively characterize radiology reports.

Their study is published in the November edition of the British journal Clinical Radiology.

Scott and Palmer concentrated on four radiologists, quantifying their reports according to 12 lexical parameters.

Among the parameters were scope of vocabulary, use of the passive voice, sentence count, and metrics reflecting concreteness, ambivalence, complexity and embellishment, among others.

Statistically comparing each radiologist with the group mean for each parameter, the authors were able to identify verbal outliers in reporting styles.

They used neural-network software to correctly identify the radiologist behind each of 60 unknown reports in the sample, indicating a “robust parametric signature.” 
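The study itself does not publish its code, but the general approach it describes can be illustrated in miniature: extract a few simple lexical parameters from a report's text, then compare one radiologist's values against the group mean as z-scores. The specific features and the crude passive-voice pattern below are illustrative assumptions, not the authors' 12 parameters.

```python
import re

def lexical_features(report: str) -> dict:
    """Compute a few simple lexical parameters from a report's text.

    These are stand-ins for the kinds of parameters described in the
    study (vocabulary scope, sentence count, passive voice), not the
    authors' actual metrics.
    """
    words = re.findall(r"[A-Za-z']+", report.lower())
    sentences = [s for s in re.split(r"[.!?]+", report) if s.strip()]
    # Crude passive-voice proxy: a form of "to be" followed by a word
    # ending in -ed/-en. A real analysis would use a proper parser.
    passive = len(re.findall(
        r"\b(?:is|are|was|were|been|being)\s+\w+(?:ed|en)\b",
        report.lower()))
    return {
        "vocabulary": len(set(words)),        # scope of vocabulary
        "sentence_count": len(sentences),
        "words_per_sentence": len(words) / max(len(sentences), 1),
        "passive_markers": passive,
    }

def z_scores(features: dict, group_mean: dict, group_std: dict) -> dict:
    """Express one radiologist's parameters relative to the group.

    Large absolute z-scores flag the verbal outliers the authors
    describe; parameters with zero group variance are skipped.
    """
    return {k: (features[k] - group_mean[k]) / group_std[k]
            for k in features if group_std.get(k)}
```

For example, `lexical_features("The lungs are clear. No effusion is seen.")` counts two sentences, eight distinct words and one passive-voice marker ("is seen"). In the study, a vector of such parameters per report was fed to neural-network software for author attribution.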

In their discussion, Scott and Palmer state that their technique may be useful when, for example, the authorship of a medical report is called into question—a not-unlikely scenario in the Digital Age, when reports are well removed from the old ways of signatures and physical deliveries.

They also stress the potential application for improving training and quality via objective feedback.

“The advantages of the analysis proposed here are its relative simplicity, quantifiability and objectivity, the latter perhaps increasing the likelihood that the data will be favorably received as feedback in quality-improvement efforts,” they write. “Time-intensive subjective analyses will remain critical components of report evaluation, but their necessity might be triaged using quicker computer-based methods.”