CHICAGO—Radiologists looking to improve their reporting chops have now viewed or downloaded the templates of best-practice radiology reports posted to radreport.org, the online template library of RSNA’s reporting initiative, some 2.6 million times.
That’s roughly 90 percent growth over the total reported at last year’s RSNA by Charles Kahn, MD, chair of RSNA’s structured reporting subcommittee, and colleagues.
This past Monday, Kahn, vice-chair of radiology at the University of Pennsylvania, returned to update attendees on the initiative’s recent progress and new directions.
Among the latter is a new effort to build on the existing work through 2016 by developing “common data elements” for radiology, Kahn said.
Kahn defined these as “pieces of information that are collected and stored uniformly across institutions and studies, and are defined in a data dictionary,” noting that they are carefully predefined—and, until now, have been more familiar to academic researchers than to clinical practitioners.
“If you were going to craft a research protocol, you would set out by saying, ‘I’m going to have something called hydronephrosis, and this is how I’m going to define it. And the only terms you can use to describe it are none, mild, moderate and severe.’”
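Kahn’s hydronephrosis example amounts to a constrained vocabulary: a named data element whose allowed values are fixed in advance by a data dictionary. A minimal sketch of the idea in Python (the class and function names here are illustrative, not drawn from any actual RSNA data dictionary):

```python
from enum import Enum

class HydronephrosisSeverity(Enum):
    """A hypothetical common data element: one named finding,
    four allowed terms, nothing else permitted."""
    NONE = "none"
    MILD = "mild"
    MODERATE = "moderate"
    SEVERE = "severe"

def record_finding(value: str) -> HydronephrosisSeverity:
    """Accept only the constrained vocabulary; reject free text."""
    try:
        return HydronephrosisSeverity(value.lower())
    except ValueError:
        allowed = ", ".join(m.value for m in HydronephrosisSeverity)
        raise ValueError(f"'{value}' is not an allowed term; use one of: {allowed}")

# "Moderate" maps to an allowed term; "slight" would raise an error.
finding = record_finding("Moderate")
```

Because every report stores one of the same four coded values, the element can be compared across institutions and extracted by software without parsing free-text prose—the point Kahn makes next about coded terminology.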
Aren’t there already examples of this approach in radiology?
There are indeed, Kahn said, answering his own question. “BI-RADS, LI-RADS, PI-RADS—there’s a variety of these, where we have constrained vocabularies that allow us to ask a specific question and use specific, focused terminology,” he said.
At the same time, radiology already has coded terminology that “goes along with all of that to make it easier to extract this information from our reports once we have created them.”
A basic zero-to-five assessment score is a simple way of thinking about how the profession can implement common data elements and use them to drive more consistent reporting, Kahn said.
“I can tell you, based on personal experience using the coding scheme for abdominal imaging studies, that [set assessment scoring] can have an amazing effect driving people to really think about what it is that they are concluding,” he said.
It can also tell the referring physician whether findings are benign or require follow-up.
In this way, getting people to think categorically “takes a lot of the fluff and the hedging out of many of our statements that we make in radiology,” he said. “This actually makes it more meaningful and useful to our referring colleagues, and we are just starting to see the impact of that.”
At another point in his talk, Kahn urged radiologists to ask reporting-system vendors to use the Management of Radiology Report Templates (MRRT) standard. This defines radiology reporting templates using an HTML5-based format and is to be the go-to standard across the RSNA template library.
“Many of the arguments against it are the same ones that vendors had when DICOM first came along,” Kahn said. “Well, why do I need that? I can connect everything within my system perfectly well.”
The answer, he said, is that the standard is “not about connecting within one system; it’s about connecting between systems. Interoperability is the watchword for healthcare information technology.”
Kahn concluded by stressing that the goal of the template library is to help people get to a consistent format.
From this will come the capability “to create reports that referring docs prefer, to improve our efficiency when reporting, to reduce the risk of communication errors, to improve compliance with the various accreditation and certification requirements, and, where possible, to profit [from] or at least not lose the various performance payment incentives in the United States.”