More patients, more studies and more documentation are driving healthcare providers to implement solutions that improve efficiency and aid in the delivery of quality patient care. Voice recognition has been heralded as a means to that end for at least a decade, but continuous improvements to the technology are making it a viable solution right off the shelf.
Sir Mortimer B. Davis-Jewish General Hospital operates the busiest emergency department in Quebec. To improve workflow and meet demand, the facility integrated wireless Pocket PC dictation functionality within its emergency room.
The hospital implemented DigiDictate-CE from Crescendo in 2003 and the company added a speech recognition module to the system in 2005.
“We needed a solution to dictate on-the-fly, wherever we were,” says Stephen E. Rosenthal, MD, associate director of the emergency department and director of medical informatics. “The initial goal was to develop a back-end system for mobile solutions. We’re now in the process of going to a front-end solution for more complex notes.”
Emergency department notes must be relatively up-to-date, says Rosenthal. “In other areas, you can wait to see notes.” And since two-thirds of the hospital’s ED volume occurs outside of the standard 9-to-5 day shift, “voice recognition offers a good opportunity to do notes whenever we need them,” he says.
Initial training was minimal, Rosenthal says. The system offers a 75 percent recognition rate off the shelf, he says, and “within a couple of weeks, the rate easily goes up to 90 percent.”
Northeast Missouri Imaging Associates in Hannibal, Mo., a hospital-based radiology group, had a similarly positive experience, according to Practice Administrator Brandon Selle. When the group went live with SpeechQ from MedQuist in the spring of 2006, “it took to their voice and learned their speech pretty quickly. It only takes about 20 to 30 minutes to train the system to your voice. Then you’re off and running.”
A survey of referring physicians revealed that report turn-around time was their biggest concern. “Being a hospital-based group, we weren’t the ones providing the transcription service so we had no control over turn-around time,” Selle says. By implementing SpeechQ, “we could take that control into our own hands.”
The group of physicians was “a little leery, but by the time they were trained, every one of our physicians actually said they love the system,” he says. The practice has seven full-time radiologists and also rotates locum tenens as needed.
As a result, report turn-around time is now just 15 to 45 minutes for a final report, down from an average of a day to a day and a half, and sometimes as long as a week. Radiologists no longer have to sign off, a day or two later, on a report they might have trouble remembering. “It’s all right there on the screen,” says Selle. Plus, after no more than an hour of training, the locum tenens know what they need to use the system effectively.
Both Selle and Rosenthal say that the implementation of voice recognition warrants a thorough evaluation of workflow and an infrastructure geared to the technology. “Interfaces needed to be set up between facilities’ management systems to generate orders,” says Selle. That way, the voice worklist will show the same patients as the PACS or other systems clinicians are reading from. He recommends setting up destinations for reports once they are dictated and physicians have signed off. “Set up network printers in the ED that will print reports directly to clinicians,” Selle says. “There’s also the interface on the backend to send the reports electronically into the facility’s systems so that they’re accessible and combined with patient information.”
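Selle’s point about keeping the dictation worklist and the PACS in sync can be sketched as a simple reconciliation check. The following Python snippet is a hypothetical illustration only; the field name `accession` and the sample accession numbers are invented for the example and are not drawn from any of the products mentioned.

```python
# Hypothetical sketch: verify that the voice-dictation worklist and the
# PACS worklist contain the same patients/studies, flagging any mismatch.
# Field names and values are illustrative, not from a specific vendor API.

def reconcile(voice_worklist, pacs_worklist):
    """Return accession numbers missing from either system."""
    voice_ids = {study["accession"] for study in voice_worklist}
    pacs_ids = {study["accession"] for study in pacs_worklist}
    return {
        "missing_from_voice": sorted(pacs_ids - voice_ids),
        "missing_from_pacs": sorted(voice_ids - pacs_ids),
    }

voice = [{"accession": "A100"}, {"accession": "A101"}]
pacs = [{"accession": "A100"}, {"accession": "A102"}]
print(reconcile(voice, pacs))
# {'missing_from_voice': ['A102'], 'missing_from_pacs': ['A101']}
```

In practice this kind of check would run against the orders interface Selle describes, so that a study appearing on the PACS without a matching dictation entry is caught before a clinician goes looking for the report.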
Rosenthal says it’s common for hospitals to underestimate the setup required for voice recognition. “You need a vision of where [the technology] will go,” he says. For example, if your bandwidth is too slow, people won’t want to use the system. Nor will you have the capacity to expand later, say to nurses using voice recognition for electronic documentation.
Rosenthal says that new technology will only be successful if it adapts to the way people already function. “You can buy the nicest solutions, but if people have to go out of their way to work in a different manner, they won’t use it.” That principle has played out, he says, as vendors are offering more ways to use voice recognition and bringing it into various applications. “The technology is fitting more into the way people work.” High failure rates in the early days of voice recognition were due to systems that asked busy clinicians to do things they wouldn’t normally do, he says.
Step by step
Aurora Health Care, a 14-hospital, not-for-profit system covering eastern Wisconsin, was working toward a fully electronic workflow in radiology when it ventured into speech recognition. PowerScribe from Nuance met the system’s workflow needs and “the voice was a nice additional feature we took advantage of,” says Ron Hartig, project manager.
The organization purposely implemented voice recognition on a gradual basis. One reason for that was physician resistance, Hartig says. “I think resistance is normal. Most people resist change. We wanted the end-users to embrace it. We didn’t want to force it on them.” The organization still had its old system from Dictaphone so the physicians could opt to use that system or the new PowerScribe solution. The doctors who did use PowerScribe were able to reduce their report turn-around times very quickly, Hartig reports.
Hartig views voice as another way of talking about patient care. “It provides timely information for us to support the care process,” he says. Transcriptionists are still available to physicians, but they also can self-edit their reports. Several doctors at one facility chose to do so about half of the time. Since some of Aurora’s radiologists are with independent groups, Hartig says the organization did not make anything mandatory. That first group of self-editors turned into several users who self-edit most of the time.
Aurora has three hospitals using voice, with another set to start within the next month. Hartig says that facilities should start their preparations for the technology earlier than they think they need to. Plus, “publicize it as much as you possibly can. It’s really important to tell the people affected the most that there’s a learning curve and that there are going to be some hiccups in the beginning.”
When Aurora spoke with other facilities using PowerScribe, one reported a 71 percent reduction in costs. Once more people at Aurora facilities are using the technology, Hartig will work on calculating the return on investment. “The main thing we always try to do is look at how it’s going to affect the patient as we get faster turn-around of reports available to referring physicians. That’s where we really see the benefit.”
Rosenthal says it’s very important to implement voice in phases. “Don’t deploy [it] on a large basis until you’ve run a good pilot and it’s functioning well,” he advises. His facility has been using voice for follow-up notes and is in the process of progressing to more complex documentation. Rosenthal has noticed that people are more willing to share their experience and knowledge about technology than they were in the early days of voice. “What we’ve done in the ED is now being deployed elsewhere in the hospital. Why reinvent the wheel each time? If somebody else can benefit, I think that’s good.”
Improve and streamline processes
Dreyer Medical Clinic in Aurora, Ill., was already a Dolbey user and went live with SpeechMagic from Philips Medical Systems in September 2006. Mary Yurkovich, RHIA, health information management director, reports that the clinic went from an average report turn-around time of more than four hours to just 15 to 20 minutes.
When the clinic began looking into voice recognition, they had already implemented electronic medical records and cut transcription outsourcing. Many organizations implement voice to reduce their backlog and save money, but Dreyer had already accomplished those goals, says Yurkovich. “We were looking to improve and streamline our processes.”
During product selection, Yurkovich found that Dolbey with the Philips speech engine had the best recognition rate. She also wanted the Voice Wave Player software that eliminates the need for a transcribing station, port and dedicated phone line. “We had to be able to copy voice over and use their system for tracking our jobs and sending the work out.”
Yurkovich says that not all of the physicians have successfully made the shift to voice recognition, but those who have “are all for it.” As a result, some physicians have been pulled off the system and others put on. “It’s a long training and learning process. I give all the credit to the supervisor of transcription—she has been constantly monitoring and adjusting.” Dreyer decided not to make the system mandatory, but continually works with the physicians to improve on their quality and recognition rate.
Since physicians are the ones using voice recognition, their involvement and support are crucial. So what advice do voice implementers offer? “Make sure radiologists are involved from the beginning and make sure that they are the ones who are the key to driving implementation,” says Selle. “If you have radiologists who are going to fight it, they’ll succeed. You have to make sure they’re behind this. They can make it inefficient if they want to.”
Yurkovich says that although facilities can eliminate transcriptionist overtime and outsourcing with voice, “you can’t just accept it as it is. Continue to push your vendor and make requests for further improvements and enhancements.”
She says that implementing voice has been challenging but exciting. “I’m always up for change, but you have to be ready for the work that goes with it. Proactively promoting it with your staff is an integral part of going live with this product.”
Implement voice for the right reasons, says Selle. “Our goal was not to make money off the system, but to provide a service to the facility and improve our marketability to the community and referring physicians.”
The majority of practices will be using some form of voice recognition, either front end or back end, within the next couple of years, says Selle.
“I can see it continually evolving and improving,” Yurkovich says. She has noticed that the technology sometimes drops little words, but “when that little word is ‘no,’ that’s a big problem.” The next upgrade should catch more of those small words.
“As more users are on the system they’re going to find more issues and keep raising expectations,” Yurkovich says. “Any software vendor needs to be open and look for ways to improve their product.”