The Voice of Reason: Users Speak Out on Tips for Implementing Voice Recognition

Yes, adding voice recognition technology can save healthcare organizations a significant amount on transcription costs. However, physicians who are successfully using voice solutions say that the best reason to use voice is to help radiologists provide better service.

Add voice recognition “to allow radiologists to take ownership of their reports,” says John Floyd, MD, of Radiology Consultants of Iowa in Cedar Rapids. The practice, along with a network of rural hospitals and an outpatient imaging center, has been using MedQuist’s SpeechQ for Radiology for several months. The technology gives the radiologist control over the content and accuracy of a report as well as when it gets delivered to other care providers. “The most impressive thing it can do is get your reports into the hands of clinicians dramatically faster,” he says. And for many physicians trying to stay ahead in a competitive radiology market, that rapid turnaround is crucial.

Floyd also identifies other benefits of voice recognition that have little to do with the bottom line. One is peace of mind: he knows that all his work is done when he leaves his office at the end of the day. The reports are complete, so there are no open questions and no reason for his partners to call him to follow up. Plus, radiologists within a healthcare organization with a geographically scattered PACS get anywhere, anytime access. That’s another reason to add voice, says Floyd. “Probably the last reason to do it is to save money.”


Customer service is the key



Andrew Litt, MD, vice chairman of radiology at New York University Medical Center, agrees that money isn’t the main motivation. “People initially think of the ability to save on transcription [costs]. That’s useful and important, but the key thing is customer service.” Litt says that the ability to have a report ready the minute he finishes looking at the images, and to get that report to another physician via any method, means that he is providing the best service he can. “That’s what we have to do if we want to be competitive as radiology providers.” And if getting those reports to other physicians more quickly means a patient can leave the hospital or begin treatment sooner, Litt is satisfied.

He also appreciates the reduction in the number of calls he now gets from other physicians. They have stopped calling in search of reports, calls that had meant lost time for Litt and other radiologists, who had to look up studies to recall their findings.

Litt’s radiology department recently began implementing Commissure’s RadWhere voice recognition system, replacing another vendor’s solution. He has been impressed with gains in the accuracy of the recognition itself as well as with the new features available. New York University Medical Center is one of only three initial installs for the small company, but that doesn’t concern Litt. “Even though the company is new, their people have been doing this for a long time. Nobody knows how to do this better,” he says. He also notes that working with a small company means avoiding a bureaucracy. “We’re dealing with all the key people all the time.”

Since the department had already been using voice recognition for several years, implementation of the basic system has been relatively easy for the 150 staff members. The department is divided by body sections, so a couple of groups got started with the system first. “Changing over 150 people at once was too big of a challenge,” Litt says. Implementing the department in thirds gives everyone time to get familiar with the system and to address issues such as problems recognizing words and terms specific to a group. The second third of employees began using the system on February 1st and the last third will begin on March 1st. Litt expects the department to roll out RadWhere’s advanced features, such as templates and macros, over the next nine months.


Leaving the physicians out


Borgess Medical Center took a different tack in implementing speech recognition. Rather than seeking a physician champion for the technology, the goal was minimal impact on physicians. The plan at the 424-bed teaching hospital in Kalamazoo, Mich., was to increase the productivity of the transcription department, improve its turnaround time and reduce costs for outsourced transcription.

The transcription department and the IT team worked together to evaluate backend systems. They decided to implement Dolbey and Company’s Fusion Text powered by SpeechMagic from Philips after seeing it in action at several other facilities.

The team set up Borgess as a beta test site in April 2003. Dolbey and Company worked with them through weekly calls, and representatives from Philips were also involved, making at least two onsite visits during the beta testing phase and after the product went live in October 2003. Thanks to all the preparation, “there was minimal productivity loss even initially and within approximately two weeks, the staff had reached their prior productivity levels,” says Julie Lux, applications analyst at Borgess. “By the end of the first month we started to see increases. By three months, we had attained our goal of a more than 20 percent increase in overall departmental productivity.”

Borgess reduced its outsourced transcription costs by $212,000 in six months and cut report turnaround time from a range of 24 to 96 hours down to 4 to 72 hours. Those results led the facility to win an award from Speech Technology magazine for “the most innovative speech recognition solution” in 2005.

Since the end-users at Borgess are its transcriptionists, the implementation process involved reassuring them about the new technology. The transcriptionists “were concerned about job loss, individual style and a complete change in their job descriptions as transcriptionists versus editors,” says Lux. “We explained that this was a tool and not a replacement of their skills.” Once speech was implemented, Lux says they enjoyed doing the speech-recognized reports and saw increased production.


Cutting repeat work, excessive calls


Eliminating redundant work was a primary driver of the move to speech recognition for Gerald Roth, MD, one of 16 radiologists at Henry Mayo Newhall Memorial Hospital in Valencia, Calif. He wanted to reduce report turnaround time because, until a report was transcribed, he and his colleagues often got numerous requests for information about its content. That is a potential problem because each radiologist who reviews the images could interpret them differently. One weekend, when Roth happened to be the radiologist on call, he got the same request three times. At least in that case, other radiologists didn’t have to go out of their way to look at the images and provide feedback.

“The phone used to ring off the hook,” he says. Callers often asked about procedures done up to three days earlier. Plus, Roth and his colleagues had no way to know who dictated which study and when.

Last July, the facility began implementing PowerScribe from Dictaphone after running the system in a test environment for one month. Having that time to learn the system worked out well; Roth says the facility’s CIO called the implementation the smoothest in her 10-year tenure. He commends the IT staff for their support in interfacing all of the HL7 messaging between the different computer systems. In fact, Roth says a good working relationship with your IT staff is required for a successful implementation. He has experienced two implementations of the exact same program at two different facilities. How well it goes “is all in how you set it up, customize the system and change your workflow. If you don’t do it right, implementation of a new system can upset everyone.”
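For readers curious what that HL7 traffic looks like, the sketch below assembles a bare-bones HL7 v2 ORU^R01 result message, the message type commonly used to push a finalized radiology report from a reporting system into a hospital information system. It is a minimal illustration only; the sending and receiving system names, field values and procedure code are hypothetical, and the article does not describe the actual interfaces built at Henry Mayo Newhall Memorial.

```python
# Minimal, illustrative HL7 v2.x ORU^R01 builder (hypothetical values throughout).
# Real interfaces carry many more fields and handle escaping, acknowledgments and errors.
from datetime import datetime

def build_oru_message(patient_id: str, accession: str, report_text: str) -> str:
    """Assemble a pipe-delimited ORU^R01 message carrying a radiology report."""
    now = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        # MSH: message header; sending/receiving applications are placeholders.
        f"MSH|^~\\&|VOICE_RECOGNITION|RADIOLOGY|HIS|HOSPITAL|{now}||ORU^R01|MSG{accession}|P|2.3",
        # PID: patient identification (ID only, demographics omitted for brevity).
        f"PID|1||{patient_id}",
        # OBR: the imaging order/procedure the report belongs to.
        f"OBR|1|{accession}||71020^CHEST XRAY 2 VIEWS|||{now}",
        # OBX: the report text itself, flagged as final ("F").
        f"OBX|1|TX|REPORT^Radiology Report||{report_text}||||||F",
    ]
    return "\r".join(segments)  # HL7 v2 segments are separated by carriage returns

print(build_oru_message("123456", "ACC0001", "No acute cardiopulmonary disease."))
```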

Roth and his colleagues decided on a drop-dead go-live date and everyone is “unofficially” required to use the system. And almost everyone is using it because they see the benefits. “We now get almost zero calls. We’ve seen a 90 percent decrease in requests for information about procedures already done. I think it’s to everyone’s benefit to get on board [with speech]. The turnaround time is so superior and reports get on charts in a more expeditious fashion. We are not redoing old work.”


A phased-in approach


A drop-dead go-live implementation may work for a group of 16 radiologists, but Floyd’s practice, Radiology Consultants of Iowa, started with a network of rural hospitals and an outpatient imaging center. The two major hospitals in Cedar Rapids will move to voice recognition over the next year.

The implementation was designed to coincide with moving the non-hospital imaging operation to a new PACS environment. “We wanted that environment to be totally paperless,” Floyd says, in addition to having the ability to access and dictate anywhere, anytime. “Voice recognition was a critical part of that.” A small committee reviewed the available products and, after several site visits, chose best-of-breed products for both PACS and voice. The group selected a Stentor PACS and MedQuist’s voice solution. Philips already owned 70 percent of MedQuist and bought out Stentor last year, so Floyd ended up with an integrated product from one vendor.

From the start, Floyd encouraged all the radiologists to self-edit their reports, but no one is required to. “We made no demands. We didn’t throw everybody into the deep end. We are preferring to allow slow adoption.” At this point, only three or four people are using a transcriptionist to correct reports on any given day. Floyd has seen that self-editing leads to reports that are shorter, more succinct, and better organized.

Floyd is happy with the results. He says SpeechQ offers the highest accuracy on the market, plus the system is tightly integrated into the PACS. That’s important, he says, “because you want to have it become part of the PACS rather than a separate product tacked on to the PACS.” That integration has delivered the desired result: the ability to read and view images from anywhere in the network’s five locations.

Another practice Floyd has found that helps with accuracy is having everyone highlight incorrect text and type in the right words. The other option is to verbally repeat the unrecognized words, which forces the radiologist to say them in a way he or she wouldn’t typically speak. “The system learns better so long as you say what you intended to say. If you correct verbally, you’re saying words in a way you wouldn’t normally. You want the system to recognize terms in the context of a sentence. Highlighting and replacing text allows it to learn more efficiently and improves accuracy.”
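Floyd’s observation maps loosely onto how speech engines adapt their language models from corrections. The sketch below is not MedQuist’s algorithm; it is a simplified, hypothetical illustration of why a typed correction made inside the original sentence gives an adaptation routine useful context, while a word repeated in isolation gives it almost none.

```python
from collections import defaultdict

# Toy language-model adaptation: boost word-pair (bigram) counts from corrected text.
# Purely illustrative; commercial engines use far more sophisticated acoustic and
# language-model adaptation than this.
bigram_counts: dict[tuple[str, str], int] = defaultdict(int)

def learn_from_correction(recognized: str, corrected: str) -> None:
    """Update bigram counts from the full corrected sentence, so the fixed term
    is learned in the context the radiologist actually dictates it in.
    The misrecognized text is kept only for logging/QA; adaptation here uses
    the corrected text alone."""
    words = corrected.lower().split()
    for prev, curr in zip(words, words[1:]):
        bigram_counts[(prev, curr)] += 1

# In-context correction: the engine sees "pneumothorax" next to its usual neighbors.
learn_from_correction(
    recognized="no evidence of new motor axe",
    corrected="no evidence of pneumothorax",
)

# Repeating the lone word, by contrast, contributes no sentence context at all.
learn_from_correction(recognized="new motor axe", corrected="pneumothorax")

print(bigram_counts[("of", "pneumothorax")])  # 1 -> context learned from the sentence
```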


Expect varying degrees of success


Despite the success for most users, there are a few for whom the transition to voice recognition software has not been smooth, Floyd says. “Three people were extremely concerned over the impact of the technology on their productivity. These were good people who had some of the highest productivity in the group, so we cannot discount those issues.” Those users send more reports to transcription, but overall, Floyd says the system is working. “The proof is in the pudding. When you look at a list of studies done with our PACS, you can display which have been dictated and which have reports attached and available. Any time you call up that list, 95 percent of the studies that have been done are dictated.”

Despite these good results, Floyd says expectations for voice recognition shouldn’t be set too high or too low. He doesn’t believe that those in his group who are struggling aren’t trying, or that they have accents or speech patterns that don’t work with the technology. Rather, he believes that some people have a talent for it, the same way that some people are good at painting or singing. “I think that’s why we’ve seen such different results.”


Take advantage of vendor expertise, user feedback


And don’t discount ongoing support from your vendor after your implementation, Floyd says. “I think it’s critical to have support after the basic training and installation. You want to be able to make changes after seeing your practice patterns and how you use it.” For example, the system offers a lot of choices. All the buttons on the handsets and microphones are programmable. Each organization has to decide which button should do what function. “You need the help of your vendor to determine what is most efficient,” he says. His facility’s IT director worked with MedQuist to fine-tune the system to best meet their needs.
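As a concrete picture of that kind of tailoring, the snippet below sketches how an organization might record its chosen button-to-function mapping for dictation handsets as plain configuration data. The button names and functions are hypothetical; the article does not list the actual assignments Floyd’s group settled on with MedQuist.

```python
# Hypothetical handset button mapping, kept as plain data so it can be reviewed
# with the vendor and applied consistently across all reading locations.
HANDSET_BUTTON_MAP = {
    "record": "start_stop_dictation",
    "button_1": "sign_and_finalize_report",
    "button_2": "send_to_transcription_for_correction",
    "button_3": "insert_normal_chest_template",
    "button_4": "next_field_in_template",
}

def describe_mapping(mapping: dict[str, str]) -> None:
    """Print the mapping in a form that can be circulated to the radiologists."""
    for button, action in mapping.items():
        print(f"{button:10s} -> {action}")

describe_mapping(HANDSET_BUTTON_MAP)
```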

If you are going to keep transcriptionists as part of the process, their feedback is important, too, says Borgess Medical Center’s Lux. “We depended on a lot of input from our transcription staff once the decision was made,” she says. “They are first in line in this process and what they produce will affect the outcome.”

Lux recommends significant staff involvement in the implementation process. “It is vitally important to provide frequent communication and reassurance to the transcription staff. Keep everyone involved in the process and let them be part of it.”

She also suggests following the advice of your vendor if they suggest standardization of your current transcription practices. “Standardizing the way everyone transcribes will greatly enhance the learning power of voice recognition.”

Beth Walsh,

Editor

Beth earned a bachelor’s degree in journalism and a master’s in health communication. She has worked in hospital, academic and publishing settings over the past 20 years. Beth joined TriMed in 2005 as editor of CMIO and Clinical Innovation + Technology. When not covering all things related to health IT, she spends time with her husband and three children.
