A neural network model can scour electronic medical record (EMR) data and determine if a patient has imaging-specific pulmonary embolism (PE)—a potential remedy for unnecessary CT imaging, reported authors of a multicenter study published in JAMA Network Open.
The machine learning platform—Pulmonary Embolism Result Forecast Model, or PERFORM—converts raw EMR data, such as demographics, vital signs, medications and lab tests, into a PE risk score for patients referred for CT imaging. When trained and validated on nearly 3,400 CT studies, PERFORM beat out all other existing PE risk scoring methods, according to Imon Banerjee, PhD, with Stanford University’s Department of Biomedical Data Science, and colleagues.
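The idea of mapping structured EMR features to a single PE risk probability can be sketched with a small neural network. This is a minimal illustration, not the authors' code: the feature names, synthetic data and network size below are all assumptions for demonstration.

```python
# Minimal sketch (not PERFORM itself): a small neural network mapping
# structured EMR features -- demographics, vitals, labs, medications --
# to a PE risk probability. All data here is synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical feature columns: [age, heart_rate, d_dimer, on_anticoagulant]
X = rng.normal(size=(500, 4))
# Synthetic labels loosely tied to two features, for demonstration only
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),  # put vitals and labs on comparable scales
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
model.fit(X, y)

# Risk score for a new patient referred for CT imaging
new_patient = rng.normal(size=(1, 4))
risk = model.predict_proba(new_patient)[0, 1]
print(f"PE risk score: {risk:.2f}")
```

The output is a probability between 0 and 1 that could, in principle, feed a clinical decision-support threshold; the real model's features and architecture are described in the JAMA Network Open paper.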
“Systematic attempts to curb unnecessary imaging for PE evaluation have focused on the use of existing predictive PE risk scoring tools, such as Wells or rGeneva, as CDS tools to inform the decision to perform advanced imaging, but in practice have had a disappointing influence on CT imaging yield or use,” the authors wrote. However, they went on to say that their method is different and “might be used as an automated clinical decision-support tool for patients referred for CT PE imaging to improve CT use.”
CT imaging is standard for diagnosing PE. However, over the past two decades, imaging orders have greatly increased while the percentage of scans that show PE has fallen. Some studies report that number stands at less than 1%.
Compounding the need to avoid improper CT use is the Protecting Access to Medicare Act, which will mandate that clinicians consult a CDS tool prior to ordering CT for PE or forgo reimbursement.
The team included 3,397 annotated CTs for PE from 3,214 patients at Stanford University hospitals in their study. Of those patients, 53% were women, and the mean age was 60.53 years. Data from another 240 patients at Duke University Medical Center were used to validate the AI method. The researchers tested several different models, including ElasticNet, artificial neural networks and other machine learning approaches.
Overall, after being tested on 100 random samples from Stanford and 101 from Duke, PERFORM was more accurate than three existing scoring models: Wells, the pulmonary embolism rule-out criteria (PERC) and revised Geneva (rGeneva).
For predicting a positive PE CT study, PERFORM achieved an area under the curve (AUC) of 0.81 in both the Stanford and Duke patients. For comparison, the next closest was the ElasticNet model, which achieved an AUC of 0.73 and 0.74 in the Stanford and Duke holdouts, respectively.
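The AUC used to compare these models measures the probability that a randomly chosen PE-positive study receives a higher risk score than a randomly chosen negative one. A small worked example, with made-up labels and scores rather than the study's data:

```python
# Illustration of the AUC metric used to compare the models.
# Labels and risk scores below are invented for demonstration.
from sklearn.metrics import roc_auc_score

labels = [1, 0, 1, 0, 0, 1, 0, 0]                    # 1 = CT positive for PE
scores = [0.9, 0.2, 0.45, 0.4, 0.1, 0.8, 0.5, 0.3]   # model risk scores

auc = roc_auc_score(labels, scores)
print(f"AUC = {auc:.2f}")  # 14 of 15 positive/negative pairs ranked correctly
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which puts PERFORM's reported 0.81 in context.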
Using conservative cutoff scores, PERFORM would have helped avoid 67 of 340 studies at Stanford and 147 of 244 studies at Duke, bolstering the positive CT yield by 78% and 40.2%, respectively, the authors wrote.
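The mechanism behind that yield improvement is straightforward: deferring studies that score below a cutoff shrinks the denominator of performed scans, so the positive fraction of the remaining scans rises. A sketch with hypothetical numbers (not the study's patient-level counts, which the article does not break down):

```python
# How a risk-score cutoff raises positive CT yield: deferred low-risk studies
# leave the denominator, so the positive fraction of performed scans rises.
# All counts below are hypothetical, chosen only to show the arithmetic.
def positive_yield(n_positive: int, n_total: int) -> float:
    """Fraction of performed CT studies that are positive for PE."""
    return n_positive / n_total

# Hypothetical cohort: 400 CT orders, 40 of them positive
before = positive_yield(40, 400)            # 10% yield

# Suppose a conservative cutoff defers 100 studies, only 2 of them positive
after = positive_yield(40 - 2, 400 - 100)   # ~12.7% yield

improvement = (after - before) / before
print(f"Yield: {before:.1%} -> {after:.1%} ({improvement:.0%} relative gain)")
```

The relative gain depends on how few positives fall below the cutoff, which is why the authors describe their chosen thresholds as conservative.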
“The neural network model PERFORM possibly can consider multitudes of patient-specific risk factors and dependencies in retrospective structured EMR data to arrive at an imaging-specific PE likelihood recommendation and may accurately be generalized to new population distributions,” Banerjee and colleagues concluded.