Speech recognition has reached a landmark. The technology has made the leap from early adopters and pioneers into the mainstream. “Speech has come of age,” sums up Linda Reino, chief operating officer of MedQuist Inc.
Historically, speech overpromised and underperformed, says Nick van Terheyden, MD, director of business development for Philips Speech Recognition Systems. Those days are history. The current generation of solutions delivers, continues van Terheyden, with back-end systems bringing as much as a 30 to 40 percent increase in productivity. Turnaround times drop dramatically — by as much as 90 percent — with speech, says Nuance Communications Senior Vice President Peter Durlach. And front-end systems that completely bypass transcription and rely on the physician to edit reports can bring additional value. What’s more, the systems pay off. According to MedQuist, some customers have realized ROI in three to twelve months.
At RSNA 2005, vendors displayed desktop integration with RIS and PACS. This year, the newest systems from major vendors offer a high level of flexibility, enabling users to flip between front-end and back-end recognition. Most systems don’t force physicians to self-edit, but many (even the skeptical) adopt self-editing once they realize its simplicity and efficiency. According to Nuance, 80 to 85 percent of PowerScribe users self-edit. What’s more, simple one-click and voice commands boost productivity and simplify the speech process. Both advances are sure to increase acceptance and adoption.
The current generation of goodies and demonstrated results are sure to tempt a wealth of new users from all avenues of healthcare, but speech is still young.
What’s on the speech horizon? Current systems like MedQuist’s SpeechQ provide XML output and send data to the clinical system in a standardized format. “This is the first step toward populating the clinical record [through speech],” notes Reino. Van Terheyden predicts the advent of a clinically driven workflow enabled by speech. The partnership between Nuance and decision-support provider AMIRSYS demonstrates the additional clinical value speech can deliver. In the not-too-distant future, speech solutions will extract clinical data, provide feedback, facilitate knowledge exchange between the radiologist and clinician, and integrate with the EHR, says van Terheyden.
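The structured output Reino describes can be pictured with a small sketch. The element names below are invented for illustration (the article does not document SpeechQ’s actual schema); the point is simply that each dictated section lands in a discrete, machine-readable field a clinical system can consume, rather than one free-text blob:

```python
# Hypothetical sketch only: element names are illustrative, not the
# actual SpeechQ XML schema.
import xml.etree.ElementTree as ET

def build_report_xml(patient_id: str, study: str,
                     findings: str, impression: str) -> str:
    """Wrap dictated report sections in a simple XML envelope so a
    downstream clinical system can extract discrete fields."""
    report = ET.Element("radiologyReport")
    ET.SubElement(report, "patientId").text = patient_id
    ET.SubElement(report, "study").text = study
    ET.SubElement(report, "findings").text = findings
    ET.SubElement(report, "impression").text = impression
    return ET.tostring(report, encoding="unicode")

xml_out = build_report_xml("12345", "CT Chest",
                           "No acute abnormality.", "Normal exam.")
print(xml_out)
```

In practice such feeds would follow a published clinical-document standard rather than an ad hoc layout, which is what allows the report data to flow on into the EHR.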
AMIRSYS announced the integration of its STATdx Clinical Decision Support system with the Dictaphone PowerScribe for Radiology solution from Nuance and with Commissure’s RadWhere Suite.
STATdx, a point-of-care clinical decision support system for imaging, is designed to support the busy practicing radiologist by increasing speed, accuracy and diagnostic confidence in complex cases. STATdx also helps surgeons, neurologists, women’s health centers and emergency departments improve patient care. Now fully integrated with Dictaphone PowerScribe, STATdx streamlines workflow by reducing the time required to research and complete a difficult imaging analysis. Its digital format sets it apart from a textbook and ensures enterprise-wide access to on-demand reference tools at the point of care.
The agreement with Commissure fully integrates STATdx with the RadWhere Assisted Diagnosis module, a component of the RadWhere reporting suite that is driven by LEXIMER, a patented Natural Language Understanding algorithm. The integration reduces the time required to research clinical content by analyzing report content, or by accepting a spoken request for information, at the time of interpretation and dictation.
Crescendo announced a new front-end speech recognition module for its MD Center-XL dictation, speech recognition and electronic signature application.
MD Center-XL now automatically produces text from dictations right on screen. Voice commands allow physicians to navigate, edit, format, spell, play, fast forward, rewind, select and sign off the report without using a mouse or keyboard. With the launch of this front-end module, MD Center-XL now allows physicians to oversee the entire documentation workflow, from dictation and correction to sign-off and final report distribution, using a single interface. The module lets healthcare facilities keep their document-creation options open: switching from back-end to front-end recognition can compensate for transcription staffing shortages or periodic peaks in activity. A facility may decide that short