Radiology: One-size-fits-all comparative effectiveness model not fit for imaging
Fundamental differences between diagnostic imaging and other areas of medicine drive the need for a dedicated framework for assessing the value of diagnostic tests, according to a commentary authored by the Working Group on Comparative Effectiveness Research (CER) for Imaging and published in the December issue of Radiology.

“The importance of CER for diagnostic imaging will almost certainly grow in the future with the development and diffusion of new testing and treatment options,” noted Scott Gazelle, MD, MPH, PhD, director of the Massachusetts General Hospital Institute for Technology Assessment in Boston.

“Payors will ask probing questions about whether and in what situations diagnostic imaging improves patient health, whether benefits of testing outweigh the risks, and ultimately whether imaging in particular situations is a good value for patients and the healthcare system given reasonable alternatives. All stakeholders–researchers, payors, providers, patients, policy makers, and manufacturers–would benefit from more guidance on the appropriate outcome measure for comparing diagnostics.”

The researchers contended that the CER model for diagnostics should not be "one size fits all," recommending instead that the types of outcomes measured be matched to the specific characteristics of the diagnostic.

The researchers presented a framework for determining the appropriate outcome measures for evaluating diagnostics that includes three elements:
  1. the size of the at-risk population;
  2. the anticipated clinical benefits; and
  3. the potential economic impact of the technology.
In general, according to the framework, diagnostic imaging technologies affecting large numbers of patients that have a relatively small expected clinical benefit and that are expensive should be evaluated using higher-level outcome measures, such as the impact of the diagnostic on society. In contrast, lower-level outcome measures are appropriate when the number of patients affected is small, the anticipated net clinical benefit is large, and the test is inexpensive.
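The framework's general logic can be illustrated as a simple decision rule. The sketch below is purely illustrative: the thresholds, labels, and scoring are hypothetical and are not part of the Working Group's published framework.

# Hypothetical sketch of the framework's logic. The thresholds, category
# labels, and scoring below are invented for illustration only.

def recommended_outcome_level(at_risk_population: int,
                              expected_net_benefit: str,   # "small" or "large"
                              cost_per_exam: float) -> str:
    """Suggest an outcome tier for evaluating a diagnostic imaging test.

    Higher-level outcomes (e.g., patient health, societal and economic
    impact) are suggested when many patients are affected, the expected
    clinical benefit is small, and the test is expensive; lower-level
    outcomes (e.g., diagnostic accuracy, effect on treatment decisions)
    when the opposite holds.
    """
    LARGE_POPULATION = 1_000_000   # hypothetical threshold
    EXPENSIVE = 500.0              # hypothetical cost threshold, USD per exam

    high_level_signals = sum([
        at_risk_population >= LARGE_POPULATION,
        expected_net_benefit == "small",
        cost_per_exam >= EXPENSIVE,
    ])

    if high_level_signals == 3:
        return "higher-level outcomes (patient health, societal/economic impact)"
    if high_level_signals == 0:
        return "lower-level outcomes (diagnostic accuracy, effect on management)"
    return "intermediate outcomes; judgment required"


# Example: a widely used, costly test with a modest expected benefit
print(recommended_outcome_level(5_000_000, "small", 800.0))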

“No guidance exists today to inform what types of endpoints may be appropriate for different types of diagnostic imaging,” said Peter Neumann, head of the Center for the Evaluation of Value and Risk in Health at the Institute for Clinical Research and Health Policy Studies at Tufts University School of Medicine in Boston.

“When researchers want to compare how a new diagnostic stacks up against an existing one, they have lots of choices. For example, they could simply measure if the new diagnostic does what the manufacturer says it will do. Or, they could measure if the new diagnostic is more accurate than the existing one, if it is more likely to change treatment patterns and so on. Which of these endpoints is appropriate and in what situations? Our model attempts to clarify that.”

Additional benefits of the framework include creating a common language for discussing the levels of outcomes data desired; promoting greater transparency around the dimensions of value for providers, patients, and payors; and supporting better patient care through more timely and actionable data on the value of a new product or new application.
