Follow-up recommendations in radiology reports are rarely standardized. Machine learning and deep learning (DL) methods are each effective for deciphering reports and may provide the foundation for real-time recommendation extraction, according to a recent study in the Journal of the American College of Radiology.
“Radiology reports with follow-up recommendations are difficult to identify, in part because of their free-text nature and their lack of standardized structure and content,” wrote first author Emmanuel Carrodeguas, MD, of Harvard Medical School in Boston, and colleagues. “Therefore, automated identification of reports containing follow-up recommendations would constitute a powerful tool for research and quality improvement and provide opportunities to ensure and track appropriate follow-up for a broad range of clinical applications.”
After selecting and annotating 1,000 reports from a large academic medical center for follow-up recommendations, Carrodeguas et al. trained traditional machine learning (TML) and DL algorithms on 850 of those reports, holding out the remainder for testing.
Assessing for the presence of follow-up recommendations, they found 12.7 percent of reports contained such information. The researchers also noted that many machine learning approaches require databases with thousands of labeled examples to train algorithms. The stable results achieved with far less training data in their study, they wrote, demonstrated that TML and DL are "feasible" for assessing follow-up recommendations in radiology reports.
Their TML algorithms achieved the following F1 scores on test data: random forest, 0.75; logistic regression, 0.83; and support vector machine, 0.85. The DL method notched an F1 score of 0.71.
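For context, the F1 score reported above is the harmonic mean of precision and recall, which balances false positives against missed recommendations. The sketch below computes it for a toy set of binary labels (hypothetical data for illustration only, not the study's reports or results):

```python
def f1_score(y_true, y_pred):
    """F1 for binary labels (1 = report flagged as containing a
    follow-up recommendation, 0 = not flagged)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: 10 reports, 3 truly containing recommendations.
y_true = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0, 0, 0]
print(round(f1_score(y_true, y_pred), 2))  # → 0.67
```

Because only about one report in eight contained a recommendation, F1 is a more informative yardstick here than raw accuracy, which a model could inflate simply by predicting "no recommendation" for everything.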
“Machine learning models seem to be useful tools, capable of detecting follow-up recommendations with minimal training data in a way that may be equally applied to retrospective analyses or potentially to near real-time monitoring of radiology reports,” the authors wrote.
In the realm of population health, Carrodeguas et al. believe an integrated system built on their work could validate recommendations against agreed-upon standards and provide real-time feedback, ultimately enabling quality improvement and the monitoring of disease-specific findings.