CMOs discuss next-generation speech recognition


ORLANDO–The next generation of speech recognition (SR) technology promises to be more than a tool for converting speech to text. By incorporating natural language processing (NLP) tools and a controlled medical vocabulary, it can map dictations to administrative codes and medical terminologies, enabling interoperability, billing, and clinician access to decision support as part of an electronic medical record (EMR).

According to Nick van Terheyden, MD, chief medical officer at Laytonsville, Md.-based Philips Speech Processing, the implementation of enterprise-wide healthcare information technology (HIT) systems, such as EMR applications, promises to provide more cost-effective care in the future.

The addition of SR technology to these systems can further reduce practice overhead while providing a competitive advantage to the groups that adopt it.

“Speech recognition can reduce costs by 30 to 40 percent, and early users will have a very high competitive advantage,” van Terheyden said during a presentation on Tuesday at the 2008 HIMSS conference.

Not only can transcription costs be reduced with currently available technology, but the next generation of SR tools will also allow clinicians to interact with medical standards even before a dictation is complete, according to Mike Levy, MD, chief medical officer of Aurora, Colo.-based Health Language.

“The language engine works simultaneously with the speech engine, allowing for real-time conversion of text to standards,” Levy said. “In addition, an EMR may be set up to allow smaller sections of dictations for various components of the record, such as history, problem lists, medications, and so on. The EMR or other application may then permit the clinician to select more specific or additional codes to further standardize the information.”

By bolstering an SR system with NLP technology and interfacing it to a language engine that maps to a controlled medical terminology, such as the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), the software can function as a decision-support tool.
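To make the idea concrete, here is a minimal sketch of the kind of pipeline described: as dictated text arrives from the speech engine, a language engine matches phrases against a controlled terminology and emits standard codes. The terminology table below is an illustrative stand-in with a handful of hard-coded entries, not a real SNOMED-CT release, and the matching logic is deliberately naive.

```python
# Illustrative sketch: map free-text dictation to coded concepts.
# The table below is a tiny hypothetical stand-in for a controlled
# terminology, not an actual SNOMED-CT distribution.

TERMINOLOGY = {
    "hypertension": ("38341003", "Hypertensive disorder"),
    "diabetes mellitus": ("73211009", "Diabetes mellitus"),
    "myocardial infarction": ("22298006", "Myocardial infarction"),
}

def map_to_standards(dictated_text):
    """Return (phrase, code, preferred term) for each concept found."""
    text = dictated_text.lower()
    matches = []
    for phrase, (code, term) in TERMINOLOGY.items():
        if phrase in text:  # naive substring match; real engines use NLP
            matches.append((phrase, code, term))
    return matches

codes = map_to_standards(
    "Patient with a history of hypertension and diabetes mellitus."
)
for phrase, code, term in codes:
    print(f"{phrase!r} -> {code} ({term})")
```

A production language engine would handle negation, abbreviations, and synonymy rather than simple substring matching, but the output — standard codes produced while the dictation is still in progress — is the point of the architecture.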

“The current interaction with the patient usually occurs separately from the ability to look up referential material pertinent to the patient,” Levy noted. “But the introduction of speech and conversion to standards within an electronic medical record enables real-time information.”

For example, according to Levy, during the patient encounter the clinician could dictate into the SR system and be presented with pertinent information such as:

- decision support, including drug information and drug-interaction checking;
- clinical pathways providing the latest evidence-based information about treating diseases;
- patient information, such as handouts that can be given to the patient at the point of care;
- order sets, such as recommended medications, tests, labs, and other orders pertinent to the patient; and
- suggested billing codes.
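One of those features, drug-interaction checking, can be sketched briefly: once a dictated medication has been converted to a coded entry, the EMR can check it against the patient's active medication list. The drug names and the interaction table here are hypothetical examples, not a clinical knowledge base.

```python
# Illustrative sketch of point-of-care drug-interaction checking.
# The interaction table is a toy example, not clinical reference data.

INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "Increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "Risk of myopathy",
}

def check_interactions(active_meds, new_med):
    """Return an alert for each active medication that interacts
    with the newly dictated one."""
    alerts = []
    for med in active_meds:
        note = INTERACTIONS.get(frozenset({med, new_med}))
        if note:
            alerts.append((med, new_med, note))
    return alerts

# A new dictated order is checked the moment it is coded:
alerts = check_interactions(["warfarin", "metformin"], "aspirin")
for med, new_med, note in alerts:
    print(f"ALERT: {new_med} + {med}: {note}")
```

Because the check runs against coded data as the dictation is processed, the alert can surface during the encounter rather than after the note is transcribed.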

According to van Terheyden, providing clinically actionable data is the key to solving fundamental challenges with EMRs: capturing data at the source for input into the EMR, supporting clinical decision-making with clinically actionable data, and providing tools that catch errors before they are committed.

“Speech recognition and natural language understanding bridge the gap between clinicians and technology,” van Terheyden said.