SIIM: Experts outline the promise and peril of quantitative imaging

ORLANDO—Quantitative imaging represents imaging’s next great frontier, according to its proponents. Skeptics, however, question whether these techniques are ready for prime time. Luciano M.S. Prevedello, MD, of Brigham & Women’s Hospital in Boston, and Adam E. Flanders, MD, of Thomas Jefferson University Hospital in Philadelphia, shared the optimists’ and pessimists’ views on quantitative imaging during a June 8 session at the annual meeting of the Society for Imaging Informatics in Medicine.

Katherine P. Andriole, PhD, of Brigham & Women’s Hospital in Boston, set the stage for Prevedello and Flanders, noting that many current radiology reports do not contain quantitative data. Yet referring clinicians clamor for such data, as they need to know whether a tumor is growing or shrinking and by how much.

Quantitative imaging could fill the gap by extracting quantifiable features from medical images to distinguish normal tissue from disease and to assess the degree of change over time.
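To make the idea concrete, here is a minimal sketch, in Python with NumPy, of the kind of computation a quantitative imaging tool performs: measuring a lesion’s volume from a binary segmentation mask and the scan’s voxel spacing, then reporting the percent change between two time points. This is not a tool discussed at the session; the masks, spacing values and lesion sizes are illustrative assumptions.

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary lesion mask in milliliters.

    mask       -- 3D boolean array (True inside the lesion)
    spacing_mm -- voxel size in mm along each axis (z, y, x)
    """
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1,000 mm^3

def percent_change(baseline_ml: float, followup_ml: float) -> float:
    """Degree of change over time, as referring clinicians ask for it."""
    return 100.0 * (followup_ml - baseline_ml) / baseline_ml

# Illustrative example: two synthetic masks standing in for segmented scans.
baseline = np.zeros((40, 64, 64), dtype=bool)
baseline[10:20, 20:40, 20:40] = True   # hypothetical lesion at time point 1
followup = np.zeros_like(baseline)
followup[10:22, 20:42, 20:42] = True   # slightly larger at time point 2

v1 = lesion_volume_ml(baseline, (2.5, 0.7, 0.7))  # assumed 2.5 mm slices
v2 = lesion_volume_ml(followup, (2.5, 0.7, 0.7))
print(f"baseline {v1:.1f} mL, follow-up {v2:.1f} mL, "
      f"change {percent_change(v1, v2):+.1f}%")
```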

Prevedello listed an array of clinical applications, including carotid artery stenosis measurement, calcium scoring, coronary artery stenosis assessment, lung nodule volumetry, liver and tumor volumetry, brain perfusion, CT colonography and emphysema quantification.

But Flanders questioned the validity of these applications. “Vendors have made it easy to create quantitative data from imaging datasets. How reliable are these? Will all vendors give us the same results on the same dataset?” He noted that multiple gaps remain to be filled.

Different acquisition and processing parameters can affect quantitative imaging values. Flanders offered the measurement of stenosis in blood vessels as an example: carotid lesions can be complicated, making reliable measurement difficult, and changing inputs such as slice thickness, vendor platform and algorithm can yield different results.
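The effect of one such input, slice thickness, can be shown with a small simulation. The sketch below, a constructed illustration rather than anything presented at the session, measures the volume of an ideal 10 mm sphere by counting voxels whose centers fall inside it; the same object yields different values as slice thickness changes, which is the variability Flanders describes.

```python
import numpy as np

def sampled_sphere_volume_ml(radius_mm: float, slice_mm: float,
                             pixel_mm: float = 0.5) -> float:
    """Volume of an ideal sphere as measured on a grid with the given slice
    thickness and in-plane pixel size (voxels counted when their center
    falls inside the sphere)."""
    z = np.arange(-radius_mm, radius_mm + slice_mm, slice_mm)
    y = np.arange(-radius_mm, radius_mm + pixel_mm, pixel_mm)
    x = y.copy()
    zz, yy, xx = np.meshgrid(z, y, x, indexing="ij")
    inside = zz**2 + yy**2 + xx**2 <= radius_mm**2
    return inside.sum() * slice_mm * pixel_mm * pixel_mm / 1000.0

true_ml = (4 / 3) * np.pi * 10**3 / 1000.0  # analytic volume of a 10 mm sphere
for slice_mm in (1.0, 2.5, 5.0):
    measured = sampled_sphere_volume_ml(10.0, slice_mm)
    print(f"{slice_mm:.1f} mm slices: {measured:.2f} mL "
          f"(true {true_ml:.2f} mL, error {100*(measured-true_ml)/true_ml:+.1f}%)")
```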

In fact, Flanders noted that the standard deviation of quantitative measures in CT brain perfusion can reach or exceed 50 percent, casting doubt on the validity of relying on these measures. Indeed, a meta-analysis published in Annals of Neurology in September 2011 did not support the use of perfusion-weighted imaging to improve outcomes for stroke patients and called for greater consistency of methods.

Flanders cited a second example, explaining that widely used physiologic MRI tests depend heavily on pre-processing, filtering, pre-defined thresholding and non-standardized algorithms.

Given the tremendous uncertainty and variability surrounding quantitative measures, Flanders concluded, “We [radiologists] need to be responsible. We are in a precarious position. We are giving these tools to clinicians, and they are generating their own quantitative results without understanding the data behind them and then making clinical decisions.”

However, quantitative imaging continues to develop at near-breakneck speed. “We can improve the clinical reliability of what we measure,” said Flanders. Advances such as reference datasets, validated algorithms, reproducibility testing and recommendations paired with automated measures all promise to improve the utility and value of quantitative imaging.
 
