Speech Users Speak Out on Newfound Efficiencies
 
 Radiologist Scott Fargher, MD, dictates as he reads PET images on a GE Healthcare Centricity RIS-IC workstation at Radiology and Imaging Specialists in Lakeland, Fla. The facility recently added M*Modal’s Advanced Speech Understanding speech engine, which is embedded in the RIS.
While speech recognition technology has proven its ability to reduce transcription costs and improve report turnaround times, these benefits are quickly becoming old news. Healthcare facilities now demand front-end and back-end systems that do more than replace dictation and transcription, offering add-on tools that streamline overall workflow in an easy-to-use format. After all, the key to speech recognition’s success is its adoption among radiologists and physicians.

Many speech offerings integrate dictation software into other systems, touting the capability to expedite radiology workflow and give the user more control. This is due in part to easier-to-use interfaces, auto-text and templated reports, and single worklists that streamline workflow. Report creation is quicker, as is communication with referring physicians and specialists. Speech recognition is clearly gaining traction across imaging facilities large and small.

Orchestrating from the telerad cockpit

At University Radiology Group in central New Jersey, the reason for adopting speech recognition goes beyond traditional speech capabilities to the user interface and the ability to consolidate orders from multiple systems into a single platform, says CIO Alberto Goldszal, PhD. The group chose RadWhere for Radiology from Nuance. “The key factor for us was the Workflow Orchestration function to create a single cockpit for dictation,” he says, adding that the system has been in use for 18 months.

Covering six hospitals and eight imaging centers throughout the state, with a wholly owned teleradiology subsidiary performing nighthawk coverage for ERs, University performs approximately 850,000 reads each year. All eight imaging centers use the front-end speech recognition system for remote reading services.

The difficult part about reading for multiple locations is consolidating multiple dictation systems into a single platform from which to dictate all incoming orders, Goldszal says. From RadWhere, users can open incoming orders from multiple sites and return reports to the appropriate hospital or facility RIS from a single workstation. Reports can be created in a variety of dictation styles, and data extraction tools allow for productivity and outcomes analysis.

“With remote reading, it is impractical to use hospital A’s dictation system, hospital B’s and so on, simply because of the incompatibility of these systems since you would have to use a separate dictation workstation for each and establish separate VPN connectivity to those hospitals—which becomes very messy, very fast,” he says.

With RadWhere, all orders from multiple sites are consolidated into a single platform and then speech recognition is used to dictate. “Most worklists are hospital-centric and we wanted a global enterprise-wide view of all the sites we are connected to in a single interface with a single voice recognition system to dictate,” he says.
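
The consolidated worklist Goldszal describes can be pictured as a simple merge-and-route step: gather orders from every connected site into one prioritized list, then send each finished report back to the RIS that originated the order. The sketch below is only a minimal conceptual illustration of that idea, not RadWhere’s actual interface; the class names, fields and routing function are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Hypothetical order record; real RIS feeds (HL7 orders, DICOM worklists) carry far more detail.
@dataclass
class Order:
    site: str            # originating hospital or imaging center
    accession: str       # accession number at that site
    priority: int        # 0 = STAT, higher numbers = less urgent
    received: datetime   # when the order arrived

def consolidated_worklist(site_feeds: List[List[Order]]) -> List[Order]:
    """Merge per-site order feeds into one enterprise-wide worklist,
    sorted by priority first, then by arrival time."""
    merged = [order for feed in site_feeds for order in feed]
    return sorted(merged, key=lambda o: (o.priority, o.received))

def route_report(order: Order, report_text: str) -> None:
    """Return the finished report to the RIS of the site that sent the order.
    The transport (HL7 result message, web service, etc.) is site-specific; this is a stub."""
    print(f"Sending {len(report_text)}-character report for {order.accession} back to {order.site} RIS")

# Example: two sites, one unified reading list
site_a = [Order("Hospital A", "A-1001", 1, datetime(2009, 5, 1, 8, 5))]
site_b = [Order("Hospital B", "B-2002", 0, datetime(2009, 5, 1, 8, 10))]
for order in consolidated_worklist([site_a, site_b]):
    route_report(order, "FINDINGS: ...")
```

The design point is the one Goldszal makes: the radiologist works from a single, enterprise-wide list rather than logging into each hospital’s dictation system over a separate VPN connection.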

Adding ‘understanding’ to speech

While many recognize the promise of speech recognition, a primary challenge that remains is compliance, as many radiologists are reluctant to transition to speech or to self-edit because they believe it will slow workflow. Many eye speech recognition with skepticism, questioning whether it truly is an aid or an interruption. Lines have been drawn and sides have been taken, with some touting front-end recognition systems as the answer while others prefer back-end systems. And yet, some heads are turning toward a system that is less about speech recognition and more about speech understanding.

At Radiology and Imaging Specialists in Lakeland, Fla., COO and CIO David Marichal says that because the practice’s speech solution, M*Modal’s Advanced Speech Understanding speech engine, is embedded in the Centricity RIS-IC from GE Healthcare, reporting becomes part of the natural radiologist workflow. “It has to do with the fact that it’s not really forcing the radiologist to change the way he or she has to dictate. From a physician acceptance perspective, it has been very smooth,” he says.

Because radiologists have different workflows, they have flexibility in choosing how to use the technology: either free-form documentation or structured reporting. GE’s Centricity Precision Reporting uses Advanced Speech Understanding technology to better understand the natural flow of language without extensive training or workflow changes.

“Rads can just talk about what they see in a report instead of using the point-and-click method associated with structured reporting,” Marichal says. “It’s even different than traditional speech recognition’s templates and brackets in which to dictate.”

As radiologists dictate in their preferred style, M*Modal’s speech engine captures an audio file and transforms it into a draft transcript. It then codes and structures the information into a Meaningful Clinical Document that is ready for review by the radiologist or a medical editor. Users can choose to self-edit or to send reports to medical editors, he adds. Upon approval, the information is encoded into clinical documents that meet the facility’s preferences and is ready to be shared.
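
Conceptually, the flow Marichal describes is a pipeline: audio in, draft transcript out, a coded and structured draft, review by the radiologist or a medical editor, and a finalized report for distribution. The sketch below illustrates only that sequence under assumed stage names; it is not M*Modal’s or GE’s actual API, and the functions are stand-ins.

```python
from dataclasses import dataclass

@dataclass
class DraftReport:
    transcript: str           # draft text produced from the dictated audio
    structured_fields: dict   # coded sections (e.g., findings, impression)

def transcribe(audio_file: str) -> DraftReport:
    """Stand-in for the speech engine: audio -> draft transcript -> coded, structured draft."""
    text = f"<draft transcript of {audio_file}>"
    return DraftReport(transcript=text,
                       structured_fields={"findings": text, "impression": ""})

def review(draft: DraftReport, self_edit: bool) -> DraftReport:
    """The radiologist self-edits, or the draft is queued for a medical editor."""
    reviewer = "radiologist" if self_edit else "medical editor"
    print(f"Draft routed to {reviewer} for correction and approval")
    return draft

def finalize(draft: DraftReport) -> str:
    """On approval, encode the report in the facility's preferred format for sharing."""
    return f"FINAL REPORT\nFINDINGS: {draft.structured_fields['findings']}"

report = finalize(review(transcribe("chest_ct_dictation.wav"), self_edit=True))
```

The key point from the practice’s perspective is that the dictation step at the front of this pipeline does not change, which is why adoption has been smooth.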

“Some [physicians] will self-edit, some will not, but the majority of them are doing a mix,” Marichal says. “It’s really hard to get rads to change their ways, but just the slight changes have been accepted because it doesn’t require them to change their style of dictation, unless they want to.”

“This is a good way to get speech recognition into a facility without rads even realizing that they are really using it,” he concludes. “And in getting radiology’s acceptance of speech recognition technology—that’s a great approach.”

Cure-all for changing rads’ habits?

Speech recognition or speech understanding, it is clear that the future of the technology lies in the hands of those who use it every day in clinical practice. Arun Krishnaraj, MD, a radiology resident, and colleagues at the University of North Carolina Hospitals in Chapel Hill say there will always be radiologists who will not change poor work habits.

According to a poster presented at the November 2008 meeting of the Radiological Society of North America (RSNA), Krishnaraj and colleagues documented improvement in report turnaround time. Previous studies had not examined the effect of individual work habits on improvements related to speech recognition, so the researchers set out to assess that impact by comparing turnaround time before and after speech recognition for the department as a whole, for each individual faculty member and for each of eight subspecialty sections.

The researchers found that implementing speech recognition did decrease turnaround time; however, the rank order of individual faculty members did not change significantly, suggesting that individual work habits may affect the effectiveness of a new technology.
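
One way to picture the “rank order did not change” observation is to rank each radiologist by average turnaround time before and after go-live and compare the two rankings, for instance with a Spearman rank correlation. The sketch below uses invented numbers purely to illustrate that comparison; it is not the study’s data or analysis code.

```python
# Hypothetical mean turnaround times (hours) per radiologist, before and after
# speech recognition. All values are made up for illustration only.
before = {"Rad A": 30.0, "Rad B": 18.0, "Rad C": 42.0, "Rad D": 24.0}
after  = {"Rad A": 12.0, "Rad B":  6.0, "Rad C": 20.0, "Rad D":  9.0}

def ranks(times: dict) -> dict:
    """Rank radiologists from fastest (1) to slowest."""
    ordered = sorted(times, key=times.get)
    return {rad: i + 1 for i, rad in enumerate(ordered)}

r_before, r_after = ranks(before), ranks(after)
n = len(before)
# Spearman rank correlation: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
d_squared = sum((r_before[rad] - r_after[rad]) ** 2 for rad in before)
rho = 1 - 6 * d_squared / (n * (n ** 2 - 1))

print("Ranks before:", r_before)
print("Ranks after: ", r_after)
print(f"Spearman rho = {rho:.2f}  (1.0 means the rank order is unchanged)")
```

In this toy example every radiologist gets faster, yet the fastest readers before the change remain the fastest afterward, which is the pattern the authors describe.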

Other findings revealed that radiologists with the fastest turnaround times learned the entire feature set of the speech recognition system and took advantage of it to improve their workflow. Those who reverted to a legacy dictation system reported slower times on average.

“Recognition of the impact of individual work habits on the effect of productivity-enhancing technology may facilitate the design and implementation of similar technologies in the future,” the authors wrote.