New AI tool may be a powerful force in cancer care

A deep learning platform created by researchers at the Dana-Farber Cancer Institute can identify cancer in radiology reports as well as clinicians, but in a fraction of the time, according to new research published July 25 in JAMA Oncology.

The model was trained on more than 14,000 imaging reports, from 1,112 patients with lung cancer, that had been manually reviewed by human curators. When applied to another 15,000 reports, the algorithm predicted overall survival with accuracy similar to that of the human assessments.

And once fully trained, the model could annotate imaging reports for the more than 2,000 patients included in the study in about 10 minutes, according to first author Kenneth L. Kehl, MD, MPH, and colleagues, compared with an estimated six months for a single curator to do the same.

“By reducing the time and expense necessary to review medical records, this technique could substantially accelerate efforts to use real-world data from all patients with cancer to generate evidence regarding effectiveness of treatment approaches and guide decision support,” Kehl, with the division of population sciences at Dana-Farber in Boston, and colleagues added.

Electronic health records (EHRs) contain mounds of information that could improve cancer care, but much of it remains unstructured. At Dana-Farber, researchers created PRISSMM, a structured framework for curating clinical outcomes of patients with solid tumors from medical records data. Even with such a framework, the authors noted, curating medical records is labor- and resource-intensive.
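In code, a PRISSMM-style curated record boils down to a handful of structured fields per report. The sketch below is a hypothetical illustration in Python; the field names are assumptions chosen for clarity, not the actual PRISSMM schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical structure for one curated imaging report. Field names are
# illustrative assumptions and do not reflect the real PRISSMM schema.
@dataclass
class CuratedImagingReport:
    patient_id: str
    report_date: date
    cancer_present: bool                 # any evidence of cancer on imaging
    status: Optional[str] = None         # e.g., "worsening" or "improving"
    distant_spread: bool = False         # spread beyond the primary site
    sites: list[str] = field(default_factory=list)  # involved anatomic sites
```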

Kehl and colleagues used the PRISSMM framework to manually review imaging reports from patients who underwent tumor genotyping for lung cancer and participated in the Dana-Farber Cancer Institute PROFILE study from June 26, 2013, to July 2, 2018. Reviewers noted whether cancer was present and, if so, whether it was worsening or improving and whether it had spread to distant sites within the body; the deep learning model was trained to extract these same outcomes.
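The study used a deep learning model for this extraction. As a rough illustration of the supervised setup, report text in, human-curated label out, the sketch below substitutes a deliberately simple bag-of-words classifier built with scikit-learn; the example reports and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented snippets standing in for manually curated radiology reports.
reports = [
    "Increased size of right upper lobe mass, consistent with progression.",
    "No evidence of residual or recurrent disease.",
    "Interval decrease in the dominant lesion, compatible with response.",
    "New hepatic lesions concerning for metastatic spread.",
]
cancer_present = [1, 0, 1, 1]  # labels a human reviewer would assign

# Simple stand-in for the study's deep learning model: TF-IDF features
# feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reports, cancer_present)

print(model.predict(["Stable postsurgical changes, no new lesions."]))
```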

On a test set of manually reviewed reports, the model performed similarly to human curators, achieving an area under the receiver operating characteristic curve (AUC) of 0.90.
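AUC measures how reliably a model's scores rank true positives above negatives: 1.0 is perfect discrimination and 0.5 is chance. Computing it takes one line once labels and predicted probabilities are in hand, as in this sketch with invented values:

```python
from sklearn.metrics import roc_auc_score

# Invented human-curated labels and model probabilities for eight reports.
human_labels = [1, 0, 1, 1, 0, 0, 1, 0]
model_probs = [0.92, 0.08, 0.85, 0.60, 0.30, 0.12, 0.77, 0.45]

print(f"AUC: {roc_auc_score(human_labels, model_probs):.2f}")
```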

The team also applied the algorithm to another 15,000 reports, from 1,294 patients, that had not been manually reviewed. The deep learning model predicted overall survival with accuracy similar to that achieved with the manually curated data.
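The kind of check described here, whether model-assigned labels stratify survival the way human-curated ones do, can be sketched with the lifelines package. The follow-up data below are invented, and this is not the authors' analysis code:

```python
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Invented follow-up data: months observed, death indicator, and the
# model's progression call for each of eight patients.
months = [4, 12, 7, 20, 15, 3, 25, 9]
died = [1, 0, 1, 0, 1, 1, 0, 1]
progressed = [1, 0, 1, 0, 0, 1, 0, 1]  # label assigned by the model

# Kaplan-Meier curves for each model-assigned group.
kmf = KaplanMeierFitter()
for label in (0, 1):
    idx = [i for i, p in enumerate(progressed) if p == label]
    kmf.fit([months[i] for i in idx], [died[i] for i in idx],
            label=f"progressed={label}")
    print(label, kmf.median_survival_time_)

# Log-rank test: does survival differ between the two groups?
result = logrank_test(
    [m for m, p in zip(months, progressed) if p == 1],
    [m for m, p in zip(months, progressed) if p == 0],
    event_observed_A=[d for d, p in zip(died, progressed) if p == 1],
    event_observed_B=[d for d, p in zip(died, progressed) if p == 0],
)
print(result.p_value)
```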

“Automated collection of clinically relevant, real-world cancer outcomes from unstructured EHRs appears to be feasible,” the researchers concluded.

“Key next steps will include testing this approach in other health care systems and clinical contexts and applying it to evaluate associations among tumor profiles, therapeutic exposures, and oncologic outcomes.”

In a related editorial, Andrew Daniel Trister, MD, PhD, with the department of radiation medicine at Oregon Health & Science University in Portland, keyed in on the study’s use of local interpretable model-agnostic explanations (LIME), and how the method can help address the black-box problem common to many machine learning algorithms.

“By highlighting elements of the data, the local-interpretable model-agnostic explanation provides the end user an opportunity to begin to evaluate how the algorithm made a specific determination,” Trister wrote. “This additional layer of data provides a check against the black-box nature of the algorithm and should be standard in solutions for clinical decision support.”
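The open-source lime package for Python implements this technique for text classifiers. Built on the same kind of invented toy model as the earlier sketch, the example below shows how it surfaces the words behind one specific prediction:

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Rebuild the invented toy classifier from the earlier sketch.
reports = [
    "Increased size of right upper lobe mass, consistent with progression.",
    "No evidence of residual or recurrent disease.",
    "Interval decrease in the dominant lesion, compatible with response.",
    "New hepatic lesions concerning for metastatic spread.",
]
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, [1, 0, 1, 1])

# LIME perturbs the input text and fits a simple local surrogate model,
# exposing which words drove this particular prediction.
explainer = LimeTextExplainer(class_names=["no cancer", "cancer"])
explanation = explainer.explain_instance(
    "New hepatic lesions concerning for metastatic spread.",
    model.predict_proba,  # classifier must return per-class probabilities
    num_features=5,
)
print(explanation.as_list())  # (word, weight) pairs for this prediction
```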

""

Matt joined Chicago’s TriMed team in 2018 covering all areas of health imaging after two years reporting on the hospital field. He holds a bachelor’s in English from UIC, and enjoys a good cup of coffee and an interesting documentary.
