Getting the Skinny on Advanced Visualization

Anomalous right coronary artery segmentation with curved planar reformation view. Image courtesy of Visage Imaging.

Thin may be in when it comes to deployment strategies for advanced visualization clients, but dedicated workstations still have a place in the clinical continuum. Although developers are resolutely focused on streamlining the delivery of 3D diagnostic imaging tools to the desktop, standalone systems that deliver the technology’s full capabilities remain viable practice partners. The objective: viewing, sharing, and collaborating on images to enable a multidimensional diagnosis quickly and easily.

For many diagnostic imaging practices, dedicated advanced visualization workstations have become legacy equipment. Moore’s Law, graphics processing boards falling to commodity prices on the popularity of computer games, rapid advances in 3D imaging software algorithms (also driven in part by the gaming industry), and a robust DICOM standard for post-processing have produced a sea change toward thin-client implementations in medical imaging informatics.

Technology in practice

Radiology and cardiology are arguably the two medical specialties that have most warmly embraced advanced visualization software. The explosive growth of multidetector CT equipment and high-field-strength MRI magnets has produced an “image overload” that threatens to overwhelm practices.

“The whole world has been changed by the advent of ultra-fast, multidetector cardiac CT,” says Robert S. Schwartz, MD, FACC, a cardiologist with the Minneapolis Heart Institute in Minneapolis. “It gives us extremely rapid, three-dimensional images which allow us to capture the entire beating heart.”

The downside to this achievement, Schwartz notes, is the massive amount of data generated to deliver the 3D data sets from today’s multidetector CT systems. “One needs to be able to handle those massive amounts of data in a very efficient and facile way to make a diagnosis,” he observes.

Eliot L. Siegel, MD, professor and vice chairman of the University of Maryland School of Medicine department of diagnostic radiology and chief of radiology and nuclear medicine for the VA Maryland Healthcare System, observes that the current generation of CT systems presents an interpretation challenge for radiologists. A cardiac CT angiography (CCTA) study can reach 2,000 or more images, and other cardiac studies routinely generate 6,000 to 8,000 images. “For the new generation of dual-source CT scanners, cardiac imaging studies can create as many as 15,000 images,” Siegel says.

It’s when this tidal wave of acquired image data shows up for interpretation that advanced viz technology, thick or thin, makes its greatest impact.

Cardiac and vascular CT studies can be particularly problematic to post-process, and that is where the bottleneck often occurs. While data acquisition takes 12 to 15 seconds and patient table time runs roughly 5 to 10 minutes, post-processing can take anywhere from 20 to 60 minutes, depending on case complexity.

Acquisition abundance

Clinicians, for the most part, have little interest in doing advanced image processing, because practice volumes simply do not allow an interpreting physician the time needed to accomplish the task.

“We simply could not handle the huge amount of interpretative data being put out by these systems in an efficient manner,” Schwartz notes. “The workload we would have had by not having advanced visualization technology in place would have been simply staggering.”

Before his practice deployed advanced visualization tools, interpreting a CCTA procedure took anywhere from 20 to 30 minutes per exam.

“Since we’ve deployed our advanced visualization system, our read time has dropped down to 3 or 4 minutes for the more straightforward exams. More complex exams, of course, take a little longer,” he says.

It’s in those more complex exams that legacy dedicated advanced visualization workstations can continue to play a role. A dedicated workstation, or thick client, moves a DICOM dataset from point A to point B, over either a local-area network or a wide-area network such as the internet, and processes it locally before the results are made available to other users. A thin-client model instead pushes the DICOM dataset to a centralized server, where the processing takes place and where it can be manipulated by users across the network.
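
To make the thin-client data flow concrete, the sketch below uses pynetdicom, an open-source Python DICOM networking library, to push a CT series to a hypothetical centralized processing server via a DICOM C-STORE request. The server host, port, AE titles, and file path are illustrative assumptions, not details drawn from any vendor or site mentioned in this article.

```python
# Minimal sketch of a thin-client push: send a CT series to a central
# processing server with a DICOM C-STORE, using pydicom and pynetdicom.
# The host name, port, AE titles, and path below are hypothetical placeholders.
from pathlib import Path

from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import CTImageStorage

SERVER_HOST = "viz-server.example.org"   # assumed central server address
SERVER_PORT = 11112                      # assumed DICOM listening port
SERVER_AE_TITLE = "CENTRAL_VIZ"          # assumed application entity title


def push_series_to_server(series_dir: str) -> None:
    """Send every DICOM file in a directory to the central server."""
    ae = AE(ae_title="THIN_CLIENT")
    ae.add_requested_context(CTImageStorage)

    assoc = ae.associate(SERVER_HOST, SERVER_PORT, ae_title=SERVER_AE_TITLE)
    if not assoc.is_established:
        raise ConnectionError("Could not associate with the central server")

    try:
        for path in sorted(Path(series_dir).glob("*.dcm")):
            ds = dcmread(path)
            # The server stores the dataset and runs post-processing centrally.
            status = assoc.send_c_store(ds)
            if not status:
                print(f"{path.name}: no response from server")
            elif status.Status != 0x0000:
                print(f"{path.name}: C-STORE returned status {status.Status:#06x}")
    finally:
        assoc.release()


if __name__ == "__main__":
    push_series_to_server("/data/ccta_series")  # example local series folder
```

In this model the client only transfers data and displays rendered results; the heavy 3D processing stays on the server, which is what distinguishes it from the thick-client workflow described above.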