To Compress or Not to Compress: The Image Compression Conundrum
Outside the world of radiology, data compression is just about as common as daylight. Look around you. Are you listening to an MP3 player? The files on it are compressed music files that contain but a fraction of the electronic information available on a compact disc. Yet compressed music files such as MP3s are the way the world has gone. Can your ears really tell the difference? Some audiophiles think so, but others doubt it. And it really doesn't matter if there is a difference. It's just music; no one's life or career is on the line.

But that's not so in radiology. Though introduced years ago to ease the transfer of large imaging datasets throughout a healthcare enterprise and to combat mounting storage problems, compression has remained a controversial subject. The first controversy arose from a generally, and perhaps understandably, nervous physician community – made up of radiologists and radiology informatics specialists, as well as other specialties – that wanted nothing to do with compressed images for fear of being sued if some mistake were made. Compression, they believed, was an easy target for hungry lawyers looking to nail a doctor.

The types of compression causing the stir break down into two basic forms. There's "lossless" compression, which decreases the size of a file by a ratio of 2:1 to 4:1. That level of compression is not that substantial, but the benefit is that the image is restored to full fidelity once decompressed. The more common form, known as "lossy," can compress data far more, but it does not restore images to their original form, and depending on the severity, a large amount of data can be lost. The old debate hinges on one question: how much information can be lost before an image is no longer of diagnostic quality?
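
For readers who want to see the distinction concretely, here is a minimal Python sketch, not drawn from any PACS product, that round-trips a synthetic image through lossless (zlib) and lossy (JPEG) compression. The image, the quality setting, and the resulting ratios are illustrative only; a real radiograph compresses far less readily than this smooth gradient.

```python
# Minimal sketch (not production code) contrasting lossless and lossy compression.
# Assumes numpy and Pillow are installed; the synthetic gradient image and the
# JPEG quality setting are assumptions for illustration.
import io
import zlib

import numpy as np
from PIL import Image

# Synthetic 512 x 512 8-bit "slice" standing in for a real radiograph.
row = np.linspace(0, 255, 512, dtype=np.uint8)
pixels = np.tile(row, (512, 1))

# Lossless: zlib round-trips to the exact original bytes. (Real medical images
# typically yield only about 2:1 to 4:1; this smooth gradient does far better.)
packed = zlib.compress(pixels.tobytes(), level=9)
restored = np.frombuffer(zlib.decompress(packed), dtype=np.uint8).reshape(512, 512)
assert np.array_equal(pixels, restored)  # full fidelity after decompression
print(f"lossless ratio: {pixels.nbytes / len(packed):.1f}:1")

# Lossy: JPEG discards information; the decoded image is close but not identical.
buf = io.BytesIO()
Image.fromarray(pixels).save(buf, format="JPEG", quality=75)
decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))
print(f"lossy ratio: {pixels.nbytes / buf.getbuffer().nbytes:.1f}:1")
print("max pixel error:", int(np.abs(pixels.astype(int) - decoded.astype(int)).max()))
```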

More recently, other types of solutions have come forward, called "just-in-time" delivery by some, which send large data sets a little at a time, as the images are needed at a workstation, so the network is not bogged down all at once. This is a highly dynamic sort of technology that might lead some to wonder whether compression is even relevant anymore.

In fact, compression is again top of mind: at the SIIM 2007 meeting in Providence, R.I., in June, the organization announced the launch of an accelerated research project to evaluate compression's uses in radiology. Results of the study are expected by early next year. The SIIM study will focus especially on what level of compression can be applied to 3D data sets.

Health Imaging & IT sat down with one of the SIIM study participants, Eliot Siegel, MD, professor and vice chair of information systems at the University of Maryland Department of Diagnostic Radiology, and director of imaging at VA Maryland Healthcare Systems. Siegel is a well-known advocate for compression in radiology.

The compression level generally considered to be of suitable diagnostic quality for interpreting images is a ratio of 10:1. Yet that reduction in file size has caused an outsized amount of trouble, you could say.

Image compression, Siegel believes, has been discussed more than any other subject in diagnostic imaging that he is aware of. An enormous amount of space has been given to the subject in journal articles regarding how and when to do it, and despite all of that, it's surprising how little it is taken advantage of and how much fear still surrounds the issue, Siegel says.

He goes so far as to say that "compression is a tiny, tiny drop in the ocean in comparison to lots of things [that] people don't question at all [in radiology]." Yet facilities with PACS today that decide to use compression must work through piles of paperwork, which must pass through legal review, just to have the functionality switched on by a PACS vendor.

The fears are simply not justified, according to Siegel. The human visual system is "pretty resilient" when viewing compressed images, as VA Maryland studies have shown, and at 10:1 compression the human eye cannot tell the difference.

However, the same cannot be said for computer systems such as computer-aided detection (CAD), which need to examine the pixel information in images to work effectively. It's not yet clear what impact compression would have on those systems, according to the research at VA Maryland.

Yet, despite all of the research, misunderstandings still abound. "The thing that most people don't understand is that inside a CT scanner or inside a DR or CR system, or whatever type of system, there is an incredible amount of modification that is taking place to the original raw data that was acquired by the detectors of whatever the device was," says Siegel.

Hence, he believes that compression has been vastly underutilized, in part because physicians are tied up with unnecessary medical-legal fears that are generally misplaced, especially in comparison to the troubles that could arise from the use of non-standard types of image processing.

And standards would help immensely in the world of compression, because results would then be more repeatable. "What we've seen is big differences from one compression technique to another. What I'd love to see would be a standard algorithm, whether it's the best or not."


The rub

Newer alternatives to conventional compression have been a major focus of the work of Paul Chang, MD, FSIIM, professor and vice-chairman of Radiology Informatics and medical director of Pathology Informatics at the University of Chicago Pritzker School of Medicine, as well as medical director of Enterprise Imaging at the University of Chicago Hospitals. Chang is known as something of a counterpoint to Siegel's pro-compression stance. He helped pave the way toward the just-in-time approach of transferring full-fidelity radiographic images without compression. The technology he co-developed is a wavelet transform capability for web-based thin clients called Dynamic Transfer Syntax (DTS). DTS was commercialized by the University of Pittsburgh and Stentor (founded by Chang), which is now owned by Philips.

But Chang wasn't always in the business of developing alternatives to standard compression. When compression first entered the radiology roadmap, he was a supporter. Like a lot of facilities, the University of Pittsburgh, where he then worked, was looking at ways to move images more smoothly and quickly through the hospital's network infrastructure. It was also looking for alternatives to viewing images on expensive workstations that would have to be placed ubiquitously in operating rooms and other locations throughout the hospital. That would be very costly, so compression seemed a good option that would enable the data sets to be transferred via the web to surgeons and other specialists at a lower cost. This, however, did not go as he originally planned.

“The surgeons basically said ‘no, that’s unacceptable,’” Chang says. And they asked him a very simple question: “Can you look us in the eye and tell us if, as a radiologist, you would make a final interpretation based on these compressed images?”

And Chang's answer, then and now, is "no" – because it's not acceptable to the surgeons, or at least to many surgeons, and thus it cannot be good enough for radiologists. This essentially comes down to the fact that there is no legal precedent for the use of compression in diagnostic imaging; thus far, no physician has been sued in a case involving compressed images and won, which would have established such a precedent.

No one wants to be the fall guy, so to speak. Radiologists who work with physicians in a hospital supply the sort of technology those physicians are comfortable using, and compression, at least right now, is not one of those technologies.

“Many times a physician – a surgeon, oncologist, or neurosurgeon – is going to make a medical call based on the image. They are taking the management decision based on interpreting the image. I believe it is unconscionable for me to give them something that I wouldn’t feel comfortable using to make a primary diagnosis. That’s the reason I had to reject compression from day one,” Chang says.

While he rejects compression, Chang agrees with Siegel that 10:1 compression "probably is indistinguishable" to the human eye no matter what "flavor" of imaging or compression you're using. But, "people vote with their feet. In other words, I don't care whether you think it is. I just ask the question: if you're going to give this web client to other people, are you willing to dictate and make primary diagnosis off what you are giving them? If the answer is yes, I've got no problem."

“But if you say ‘hell no’ which is what most people do, then I say, well basically, it’s a non-starter. Because what you’ve done is created a two-class system. You’ve created a system where the radiologists get access to the real good stuff and you’re giving everybody else something that you don’t feel comfortable interpreting. Why should they?” Chang says.

These aren’t the only reasons why Chang doesn’t support compression. Technological evolution in other areas has made compression less attractive, especially as imaging technology has changed and storage technology has become cheaper.

Data sets, CT for example, have become so large that compression cannot shrink them enough to make them easily transferable even on modern networks. In these days of 64-slice CT, the 50-image data sets of yesterday have become 2,000-slice studies carrying 1 gigabyte (GB) of information or more, and 10:1 compression can only reduce such a study to about 100 MB.
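
For a rough sense of the arithmetic behind those figures, a back-of-the-envelope sketch follows; the assumed per-slice size (a 512 x 512 matrix at 2 bytes per pixel) is an illustration, not a number from the article.

```python
# Back-of-the-envelope check of the figures quoted above.
slices = 2000
bytes_per_slice = 512 * 512 * 2            # ~0.5 MB per slice (assumed)
study_mb = slices * bytes_per_slice / 1e6  # ~1,050 MB, i.e. about 1 GB
compressed_mb = study_mb / 10              # 10:1 compression still leaves ~105 MB
print(round(study_mb), round(compressed_mb))
```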

“Trying to send that around wirelessly or through other means is really not practical, even now. So, compression doesn’t solve the problem anymore,” says Chang.

Another argument that can be made in favor of standard compression is saving storage space, especially in the case of older studies. An enormous amount of storage could be saved if many of your studies were stored at 10:1.

But this too, Chang feels, has become a non-issue as mass storage has become cheap, driven by the likes of Google (the company offers 2 GB of free storage to all free Gmail users) and YouTube, whose zillions of videos gobble up enormous amounts of storage capacity.

However, Chang notes that some PACS vendors have not embraced the commodity archive design, and antiquated archives can be very costly to update. But most of the major players have understood that mass storage is a commodity and have outsourced it to EMC, IBM or other mass storage providers as a cheap alternative. The wrong approach, he feels, is to use compression to get a little more life out of an antiquated storage architecture.


Just-in-time

So, many of these issues could be skirted, Chang says, by considering an alternative to conventional compression.

A model such as the DTS that Chang co-developed includes dynamic intelligence that evaluates the way you are using your PACS workstation, considers how many images you need at any given time, and sends them as you need them. It keeps up and anticipates. Rather than sending a gigabyte at a time, it sends about 500 kilobytes at a time.

This sort of delivery has come to be known as just-in-time delivery, though others call it other things, such as on-the-fly compression. Using this approach, a single image could be sent every second or so, which is manageable and could even be accomplished over wireless. The software sends one slice at a time and evaluates radiologist behavior to understand what is needed next.
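
To make the idea concrete, here is a toy Python sketch of just-in-time slice delivery. It is not Chang's DTS or any vendor's product; the viewer class, the fetch call, and the roughly 500 KB slice size are assumptions used only to illustrate the fetch-as-you-scroll, prefetch-ahead pattern described above.

```python
# Toy sketch of "just-in-time" slice delivery: the viewer pulls slices as the
# radiologist scrolls and quietly prefetches a few slices ahead. All names and
# sizes are hypothetical.
SLICE_BYTES = 512 * 1024  # ~500 KB per slice, as described above


def fetch_slice(study_id: str, index: int) -> bytes:
    """Stand-in for a network call that returns one slice from the archive."""
    return bytes(SLICE_BYTES)  # placeholder payload


class JustInTimeViewer:
    def __init__(self, study_id: str, num_slices: int, lookahead: int = 3):
        self.study_id = study_id
        self.num_slices = num_slices
        self.lookahead = lookahead
        self.cache = {}  # slice index -> slice bytes already on the workstation

    def show(self, index: int) -> bytes:
        """Return the requested slice, prefetching the next few behind the scenes."""
        current = self.cache.get(index) or fetch_slice(self.study_id, index)
        self.cache[index] = current
        # Anticipate scrolling: pull the next few slices into the local cache.
        for ahead in range(index + 1, min(index + 1 + self.lookahead, self.num_slices)):
            self.cache.setdefault(ahead, fetch_slice(self.study_id, ahead))
        return current


viewer = JustInTimeViewer("CT-demo", num_slices=2000)
viewer.show(0)  # only a handful of ~500 KB slices cross the network
viewer.show(1)  # already prefetched, so it appears instantly
```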

“A user can’t tell whether the slices are already on the PC or if they’re coming from the server because it’s sending it as they need it, or just before,” says Chang. Most vendors now have some version of this type of delivery system “which obviates the requirement for compression” in his view.

Siegel also believes that "on-the-fly" compression is very useful because it delivers images tailored specifically to an individual PC, exactly when they are needed.

For example, VA Maryland’s 3D vendor TeraRecon uses similar technology so that “we don’t have to send the entire dataset, but we open up a window to the computer server and we’re able to do manipulation of the image, not on our computer, but rather on the server remotely. It only has to send us the rendered information rather than the entire dataset.”
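
The pattern Siegel describes is server-side rendering: the heavy volume stays on the server, and the workstation receives only small rendered frames. Below is a hypothetical Python sketch of that request/response shape; the request fields, function names, and payload sizes are invented for illustration and do not represent TeraRecon's API.

```python
# Hypothetical sketch of server-side rendering: the client sends rendering
# parameters and gets back a small rendered frame, never the full volume.
import json
from dataclasses import asdict, dataclass


@dataclass
class RenderRequest:
    study_id: str
    view: str       # e.g. "axial", "coronal", or "3d-mip" (illustrative values)
    slab_mm: float  # thickness of the rendered slab
    window: int     # display window width
    level: int      # display window level


def render_on_server(request: RenderRequest) -> bytes:
    """Stand-in for a remote call: the multi-gigabyte volume never leaves the server."""
    payload = json.dumps(asdict(request)).encode()
    # A real system would return a rendered frame of a few hundred kilobytes;
    # here we simply return a placeholder of roughly that size.
    return payload + bytes(300 * 1024)


frame = render_on_server(RenderRequest("CT-demo", "3d-mip", 10.0, 400, 40))
print(f"received {len(frame) / 1024:.0f} KB instead of the full dataset")
```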

Given the benefits, just-in-time delivery has become a fairly revolutionary approach, especially in instances where conventional compression doesn’t suffice due to the current massive size of many radiology data sets.


Conclusion

There's obviously still a lot of room in the radiology world for standard compression. Not every facility can afford the latest PACS or storage architectures that some view as nullifying compression's benefits, such as easing the sharing of files across a network and saving storage space.

Physicians – and healthcare facility legal departments – remain hesitant, though, about adopting 10:1 compression, even after all these years of debate and the efforts of physicians and researchers such as Siegel to educate people about its usefulness.

Any facility tired of the argument and looking to avoid compression altogether, and with the financial wherewithal to boot, can follow Chang's path and adopt just-in-time applications. Then you can "sleep at night" and not worry about it. More answers, too, are expected early next year from the SIIM compression study. Stay tuned.