The Centers for Medicare & Medicaid Services (CMS) has mandated that facilities integrate advanced visualization imaging from CT, MRI and nuclear medicine into their EHRs/EMRs by 2015. While integration isn't an immediate need, now is the time to plan the path to meeting the mandate. Server-side processing could help facilities meet meaningful use criteria by permitting 3D image manipulation to take place on a central server before transmission to a thin client. However, several technical challenges remain.
Rasu B. Shrestha, MD, medical director of interoperability and imaging informatics at the University of Pittsburgh Medical Center, sees an opportunity to change a healthcare culture that has historically separated EMRs and imaging. "It's not only about retrieving images directly from the EMR, but also about addressing some of the fundamental flaws of delivering siloed medicine," says Shrestha.
Such interoperability is already possible on some levels; the question is whether it is meaningful. In Northern California, clinicians have full access to 3D reconstructed radiology images—MIPs, MPRs and coronal reformats—through embedded URLs in the EMR that launch PACS viewers. However, this does not include 3D volume-rendered images, says Daniel Navarro, MD, the regional chief of imaging informatics for The Permanente Medical Group of Northern California and chief of imaging informatics for Oakland Radiology.
Navarro's radiologist colleagues have not fully embraced 3D volumetric imaging because of the need for dedicated workstations. Radiologists instead rely on dedicated 3D labs to postprocess the few subspecialty cases that warrant volumetric imaging. Some surgical and oncological specialists have access to volumetric imaging, but it's through dedicated tools that are not integrated into the EMR, Navarro says. "The thin-client model is an interesting possibility, but it will have to be part of the PACS; then it can be truly integrated into EMRs, so referring physicians actually have access to the images."
There have been some dramatic changes in advanced visualization in the last few years: the exponential increase in imaging data; server-side rendering, which allows access to advanced visualization from thin clients rather than dedicated workstations; and the increasing need for other clinicians to access imaging data. However, not every physician wants to view images, and even fewer want to access 3D and/or volume-rendered images.
"The challenge is to find the right balance between having enough tools or too many tools available," says Shrestha. The industry needs to build on preliminary work focused on providing the right set of tools for the right physicians who access the studies, he says. "That level of customization will quickly become the theme of how you access data across the board, not just the tools available, but even the types of data that get presented to you."
The push for meaningful use is mainly about having all patient information available to any clinician in the continuum of care. The problem is that this information cannot be taken out of context. "There are scenarios where standard imaging and clinical data from the EMR work together for clinicians to make better and faster treatment decisions and for radiologists to provide better reporting," says Khan Siddiqui, MD, principal program manager for the Health Solutions Group at Microsoft and chair of the IT and Informatics Committee for the American College of Radiology. "It's the same with advanced imaging: The clinician only wants the information if it will be meaningful."
Some institutions have built applications that provide clinical context such as lab values, notes and prior reports along with the study being interpreted by radiologists. "Conversely, if specialists don't have the ability to use advanced visualization tools, to manipulate 3D and volumetric images, then they are not looking at the images in the proper clinical context," Siddiqui says.
"All vendors are building advanced visualization programs, but how do you build them in a way that will integrate in a meaningful manner into other people's applications and contextually launch with that application? That is the big challenge," he says.
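The contextual launch Siddiqui describes is often implemented as a URL the EMR constructs at click time, carrying patient and study context to the viewer—much like the embedded URLs Navarro's group uses. As a minimal sketch only: the base URL, parameter names and identifiers below are hypothetical, not any vendor's actual API; real deployments use vendor-specific parameters or standards such as IHE's Invoke Image Display profile.

```python
from urllib.parse import urlencode

def build_viewer_launch_url(base_url, patient_id, accession_number, user_id):
    """Build a context-carrying launch URL for an embedded PACS viewer.

    All parameter names are illustrative placeholders; a production
    integration would follow the viewer vendor's documented launch
    parameters (or a standard such as IHE Invoke Image Display).
    """
    params = {
        "patientID": patient_id,          # which patient's studies to open
        "accessionNumber": accession_number,  # which study to open in context
        "requestingUser": user_id,        # for auditing/role-based tool sets
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical usage from within an EMR results screen:
url = build_viewer_launch_url(
    "https://pacs.example.org/viewer", "MRN-00123", "ACC-98765", "dr.jones"
)
print(url)
```

Because the EMR passes context rather than pixels, the heavy 3D rendering can stay on the server side while the clinician's thin client simply opens the resulting link.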
Two of the primary challenges are input devices and user interfaces. When Siddiqui and former colleagues at the Baltimore VA Medical Center evaluated how radiologists interact with reading stations, they found