Institutional examination-specific dose metrics could be misleading


Institutional examination-specific dose metrics could be misleading because the least-benefited patients could disproportionately contribute toward “improved” averages, according to a study published in the May issue of the American Journal of Roentgenology.

Benchmarking procedures for radiation dose are evolving and bringing an important issue to the forefront: radiation-induced cancer risks depend on patient characteristics such as age, sex and life expectancy, and in many institutions older patients or those with low life expectancies account for a disproportionately large share of diagnostic radiation exposures.

“This means that an institution's average reported dose level for a specific examination type, regardless of the specific metric used, will draw largely from patients who are the least likely to benefit from dose-reduction initiatives,” wrote lead author Jonathan D. Eisenberg, of the Massachusetts General Hospital Institute for Technology Assessment in Boston, and colleagues. “When an aggregate quality metric draws substantially from a population that incurs the least associated benefit from the intervention, by definition, the value of the metric can be diminished.”

The authors also pointed to the reality that in some institutions, reduced dose means reduced image quality, a trade-off that may be best for younger, healthier patients, who face a higher lifetime risk of radiation-induced cancer. In older, sicker patients, however, even small benefits from improved image quality could outweigh the benefits of dose reduction.

Eisenberg et al evaluated whether examination-specific radiation dose metrics are a reliable indication of an institution’s success in reducing cancer risks by considering a hypothetical institution seeking to decrease its average effective dose for abdominopelvic CT. Using modeling techniques, the researchers projected radiation-induced cancer risks and drew on tertiary center data to characterize the institution’s abdominopelvic CT age distribution. They then compared a program in which effective doses were reduced equally, from 10 to 7 mSv, across all scans with programs in which dose reduction was age dependent. Eisenberg and colleagues projected lethal cancers averted, life expectancy gained, and average institutional dose achieved for each program.

Analysis of the age distribution was based on 20,979 CT scans, 39 percent of which were from patients 65 and older. If all patients in the institution underwent 7-mSv scans, the maximum number of lethal cancers averted was projected to be seven per 100,000 patients. The maximum life expectancy gained was estimated at 0.26 days per patient.

When the 10-to-7-mSv dose reduction was restricted to patients younger than 65, the benefits were slightly lower, with five lethal cancers averted per 100,000 patients and 0.22 days per patient gained. However, the average institutional dose was substantially higher, at 8.2 mSv.

While dose reduction in patients 65 and older accounted for only 16 percent of possible institutional life expectancy gains, that patient group contributed disproportionately, 39 percent, to the institution’s average dose.
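The arithmetic behind these figures can be sketched from the numbers reported in the article alone. The following back-of-envelope check assumes only the stated inputs: 39 percent of scans from patients 65 and older, per-scan doses of 10 mSv (unreduced) and 7 mSv (reduced), and per-patient life expectancy gains of 0.26 days (all scans reduced) versus 0.22 days (under-65 scans only); it is an illustration, not the study's actual risk model.

```python
# Inputs reported in the study
share_65_plus = 0.39   # fraction of scans from patients 65 and older
dose_full = 10.0       # mSv, before dose reduction
dose_reduced = 7.0     # mSv, after dose reduction

# Program 1: reduce dose for every scan -> average dose is simply 7 mSv.
avg_all_reduced = dose_reduced

# Program 2: reduce dose only for patients under 65. The 65-and-older
# scans stay at 10 mSv, pulling the institutional average back up.
avg_under65_only = share_65_plus * dose_full + (1 - share_65_plus) * dose_reduced
print(f"average dose, under-65 program: {avg_under65_only:.1f} mSv")  # ~8.2 mSv

# Life-expectancy arithmetic: reducing all scans gains 0.26 days/patient,
# reducing only under-65 scans gains 0.22 days/patient, so the 65-and-older
# group accounts for (0.26 - 0.22) / 0.26 of the possible gains.
gain_all = 0.26
gain_under65_only = 0.22
share_of_gains_65_plus = (gain_all - gain_under65_only) / gain_all
# ~15% with these rounded inputs; the study reports 16 percent.
print(f"share of gains from 65+: {share_of_gains_65_plus:.0%}")
```

The mismatch is the study's point in miniature: the 65-and-older group drives 39 percent of the average-dose metric while contributing roughly 16 percent of the achievable benefit, so an institution's average dose overweights exactly the patients least likely to benefit from reduction.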

“Tailoring dose-reduction efforts to preferentially affect younger healthier patients, thus allowing elderly patients or patients with low life expectancy the benefits in image quality that may be afforded by higher radiation doses, may compromise an institution's performance metrics, even though their efforts may be appropriately patient centered,” wrote the study’s authors. “Our findings emphasize the need to consider more granular patient-centered benchmarks when evaluating an institution's performance in radiation dose reduction.”