Taking Speech to the Next Level

Standardization, clinical decision support, integration and structured reporting are all part of speech recognition’s path forward, and many vendors are well on their way to offering these advances. Some are already on the market.

Stephen E. Rosenthal, MD, associate director of the emergency department and director of medical informatics at Sir Mortimer B. Davis-Jewish General Hospital in Quebec, Canada, began piloting a mobile speech solution in the emergency department back in 2003. While that was primarily for progress notes and other on-the-fly documentation, he’s now working with Crescendo Systems to help physicians do more complex—but still mobile—documentation. “The nice thing about where we’re going is the multiplatform potential, especially with foreground documentation.”


All about integration

Hospital and departmental information systems need to become more integrated before users can, for example, go on any device and choose a particular kind of note to document. Then, users can get a default patient list—only those patients who fit the category selected. “Selecting your patient has to be easy for doctors and reliable,” Rosenthal says. And while speech technology allows for automation, macro development and templates, Rosenthal cautions against creating hundreds of templates, for example. “Nobody wants to use it if it’s too complex. It cannot take longer than writing.”

Standardized documentation gives notes a searchable structure, Rosenthal says, which will eventually enable clinical decision support. Reference material and other background information can help clinicians determine the best test to order and develop appropriate care plans. Structured documentation also supports better auditing and other tasks that improve patient care.

Chris Spring, senior product manager for SpeechQ at MedQuist, also sees documents themselves becoming more dynamic in the future. For example, a physician dictating a report could click on a condition or symptom in that report and link to reference information that assists in diagnosis for that particular exam. He also sees the ability to tie CPT coding into documents.

Jason Koller is director of RIS and SpeechQ for Inland Imaging, a group of imaging centers headquartered in Spokane, Wash., where SpeechQ version 1.2 was beta tested. “With this release comes a new and improved integration between the Philips iSite [PACS] and SpeechQ,” he says. The integration is more bidirectional so users can open images from iSite and have that drive SpeechQ and vice versa. “Instead of having to hunt and peck for images in iSite, they can work off of a SpeechQ worklist.”

Another integration aspect of the new SpeechQ that Koller appreciates is the ability to break integration between iSite and SpeechQ. That will be helpful when radiologists are signing off on reports or even when they’re doing rounds. When pulling up a lot of studies within iSite, they can temporarily suspend integration.

Integration is a very, very important part of making speech recognition technology easier for physicians to accept and use, says Spring. “The physician lives at the desktop. We’ve already demonstrated improved patient care and return on investment with speech recognition. Now we have to make it easier for the physician to accept the technology.”

Before digital dictation, Inland radiologists spent a lot of time re-reviewing exams, Koller says. Now, radiologists can spend more time on each exam, but the organization gets more work out of the radiologist per day. “We can grow with our existing base of radiologists,” he adds. Before implementing SpeechQ, Inland typically had 4,000 to 5,000 reports in the queue awaiting transcription. Today, 90 to 95 percent of reports are turned around in 30 minutes.


Craving critical test results

Terence Matalon, MD, chairman of the department of Radiology for Albert Einstein Medical Center in Philadelphia, has been using PowerScribe from Nuance for about two years. When he joined the facility in 2003, average report turnaround time was 112 hours. Thanks to the implementation of both Fujifilm’s Synapse PACS and PowerScribe, turnaround time is now 12 hours. Although that’s a significant improvement, turnaround time “is a moving target,” he says. “There’s always going to be an increasing demand to shorten that time. The ideal is to have a completed, verified report at the completion of the exam.”

While the facility’s users have been pleased with the product, they’re even more excited now due to the recent integration of critical test results communication. The hospital had already purchased Vocada’s Veriphy—designed to ensure and document the transfer of critical test results to referring clinicians—before Nuance acquired the company last year. The integration of Veriphy into the PowerScribe desktop increases productivity, Matalon says, by eliminating the need to keep two applications open to accomplish the same task.

With the timely communication of critical test results being one of The Joint Commission’s new patient safety goals, Matalon says it’s important for institutions to be able to produce reports on communication of critical test results. “It’s just a fantastic tool to both improve quality and to decrease the amount of wasted time we would have on the radiology side to ensure that a communication was made.”

Now that it’s an integrated product, users don’t have to re-enter patient information. Another plus is that users don’t have to remember to click on a separate icon to open such ancillary applications. The facility uses RadPeer, the American College of Radiology’s quality assurance program. Much like the integration of Veriphy, RadPeer is now integrated with the facility’s PACS.

Those kinds of streamlining efforts are what will differentiate one speech product from another, he says. “There are dozens of products that can reliably show you the current exam, prior exam and reports. The differentiating factor is how well they integrate with third parties and how well they reduce the amount of work involved in interpreting and generating reports.”


Nursing in the loop

Meanwhile, Philips Speech Recognition Systems is working on nursing documentation and tying in more content for decision support. “In essence, we want to showcase a patient encounter module that allows you to dictate freely and structure the information as much as possible,” says Klaus Stanglmayr, strategic product marketing manager.

Currently, Philips is showcasing a prototype that works with VoiceViewer, which provides nurses with a recorder they carry to the bedside. Anything they enter into the recorder is converted from speech to text and inserted into the appropriate template or IT system.

“We think the key is even tighter integration into systems people will be using in the future,” he says. “I think we can see this move from speech not just delivering text but really becoming part of whole workflow, becoming part of the way people document.”

Interoperability, the ability to exchange data across systems and countries, is becoming increasingly critical in Europe, Stanglmayr says. Standardized terminology would eliminate the need to translate data from one language to another. While his clients once focused on individual documents, countries such as Spain and Norway have now initiated regional programs.


Going forward

Rosenthal has noticed that the skepticism surrounding speech recognition of five years ago is gone. “People see what I’m doing and every single department wants in. Everybody acknowledges that they want a readable, standardized chart,” he says. “From what I’m hearing, there’s an openness to get notes done faster in the foreground.”

However, if speech systems aren’t relatively uniform, he says, people will find systems on their own and use them. “Then you have a hodgepodge of systems that don’t talk to each other and standardization is lost,” he says. “We are much better investing in something uniform.”