Structured reporting may not solve the quality problems associated with free-text dictation, according to a study published online Aug. 25 in Radiology.
“Structured reporting systems in radiology are conceptually appealing for many reasons but cannot be assumed to have a positive effect on individual report quality; such systems should be specifically tested for effect on report quality before implementation,” the authors wrote.
Lead author Annette J. Johnson, MD, associate professor of radiologic sciences at Wake Forest University Baptist Medical Center, and colleagues evaluated a commercially available software system for structured reporting, eDictation (eDictation, Marlton, N.J.).
They chose this structured reporting system (SRS) for several reasons, including:
- The system is flexible enough to incorporate various chosen lexicons and standardized phrases;
- The image report information is stored in a fully coded fashion;
- The system can elicit and codify uncertainty; and
- The system can elicit and codify causal and associational relationships of imaging findings.
The SRS included a new mouse-driven method for structured reporting, with clinical scenario-specific, defined lexicons that prompted participants to answer specific, relevant questions about the imaging test. Clicking on a topic such as “size” offered multiple standardized phrases as options. Selecting phrases from the drop-down menus automatically generated complete, consistently formatted sentences. (For example, the user could choose “small,” “globus pallidus,” “left,” and “infarct,” and the sentence “A small infarct is present in the left globus pallidus” would consistently appear in the report.)
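The templating mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the actual eDictation implementation: the `LEXICON` dictionary, the field names, and the `build_sentence` function are all assumptions chosen to mirror the article's example.

```python
# Hypothetical sketch of drop-down-driven sentence generation, loosely
# modeled on the SRS behavior described in the article. The lexicon
# entries and template below are illustrative assumptions only.

# Standardized phrase options, as might populate the drop-down menus.
LEXICON = {
    "size": {"small", "large"},
    "location": {"globus pallidus", "thalamus"},
    "side": {"left", "right"},
    "finding": {"infarct", "hemorrhage"},
}

def build_sentence(size: str, location: str, side: str, finding: str) -> str:
    """Assemble selected standardized phrases into one report sentence."""
    # Reject any phrase that is not part of the defined lexicon, so the
    # report stays fully coded and standardized.
    for field, value in [("size", size), ("location", location),
                         ("side", side), ("finding", finding)]:
        if value not in LEXICON[field]:
            raise ValueError(f"{value!r} is not a standardized phrase for {field}")
    # A fixed template guarantees a consistently formatted sentence.
    return f"A {size} {finding} is present in the {side} {location}."

print(build_sentence("small", "globus pallidus", "left", "infarct"))
# A small infarct is present in the left globus pallidus.
```

Because every phrase must come from the lexicon, the output is fully coded rather than free text, which is what makes such reports machine-searchable.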
Researchers recruited two groups of radiology residents for the study: a control group (16 residents) and an intervention group (18 residents). Participants were asked to create reports for 25 cases of cranial MR imaging in patients suspected of having a stroke. First-year residents were excluded because of their lack of experience with MRI interpretation.
The 25 cases included a mixture of complexity (ischemia, vasculitis, multiple sclerosis, etc.) and all had confirmed diagnoses.
For phase 1 of the study, both groups reviewed the cases and dictated the reports without the use of templates. For phase 2 of the study, which occurred four months later, both groups again reviewed the cases but only the control group dictated reports free-text style, while the intervention group used the SRS.
Each resident in the intervention group had received the requisite training on the SRS as defined by the vendor, and support was available as needed during the report-generating sessions.
Researchers found no differences in accuracy or completeness of reports during phase 1, when both groups used free-text dictation. In phase 2, they found statistically significant differences in both accuracy and completeness, with the SRS users scoring lower and producing more incomplete reports than the control group.
Interestingly, the control group improved in both accuracy and report completeness between phases 1 and 2, while the intervention group did not; in fact, the intervention group's accuracy and completeness decreased. These differences were statistically significant.
Eight participants from the intervention group who answered a post-study opinion survey indicated that they liked the concept of an SRS but found the system used in this study "overly constraining" and "inefficient."
“Standardization through structuring of reports is a laudable goal at many levels, but our study results suggest that the effect of such systems on the intrinsic quality of reports cannot be presumed to be positive,” the authors concluded.
They suggest that any proposed SRS be fully evaluated before large-scale implementation.
“Proposed schemes for SRS need to focus on key challenges to quality, in addition to agreement on lexicons and defined ontologies. These challenges will likely mean even more work for those involved in efforts such as RadLex (the project of the Radiological Society of North America [RSNA] to create a single medical imaging terminology index) and translation of these efforts into workable dictation systems,” they wrote.