BMJ: Assessing hospital quality based on mortality creates hazy results
Mortality rates are easy to measure and seem important, but using them to judge hospital quality tends to produce biased assessments of performance. That bias can be reduced by basing performance assessments on other outcomes and on clinical processes, according to an analysis published April 20 in the British Medical Journal.

“Standardized mortality rates are a poor measure of the quality of hospital care,” wrote Richard J. Lilford, PhD, of the University of Birmingham in England, and Peter Pronovost, MD, PhD, of the Johns Hopkins University School of Medicine in Baltimore, in their analysis.

According to Lilford and Pronovost, mortality rates have become a major basis for measuring hospital performance. Mid Staffordshire Hospital in Stafford, England, is under investigation for high death rates, and the authors argued that mortality rates should “not be a trigger for public inquiries such as the investigation at the Mid Staffordshire Hospital.”

When mortality is used to measure performance, the researchers said, “the problem stems from the ratio of a low signal (preventable deaths) in relation to high noise (deaths from other causes).” They added that “a common but naive response is to argue that risk adjustment to produce a standardized mortality ratio (SMR) solves this problem.”
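To make the signal-to-noise problem concrete, here is a minimal Python sketch using purely hypothetical figures (none are from the BMJ analysis): an SMR is simply observed deaths divided by the deaths a risk model expects, so even doubling the number of preventable deaths barely moves the ratio when non-preventable deaths dominate the total.

```python
# Illustrative, hypothetical figures -- not from the BMJ analysis.
# SMR = observed deaths / expected deaths (expected deaths come from a risk model).

expected_deaths = 1000          # deaths the risk model predicts for this case mix
non_preventable_deaths = 950    # "noise": deaths not related to quality of care
preventable_deaths_good = 30    # "signal" at a better-performing hospital
preventable_deaths_poor = 60    # "signal" at a worse hospital (twice as many)

smr_good = (non_preventable_deaths + preventable_deaths_good) / expected_deaths
smr_poor = (non_preventable_deaths + preventable_deaths_poor) / expected_deaths

print(f"SMR, better hospital: {smr_good:.2f}")   # 0.98
print(f"SMR, worse hospital:  {smr_poor:.2f}")   # 1.01
# Doubling preventable deaths shifts the SMR by only about 0.03,
# easily lost in ordinary year-to-year variation.
```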

The researchers noted that risk adjustment is intended to account for differences in case mix when comparing risk-adjusted mortality and quality outcomes across hospitals. Yet risk adjustment “can exaggerate the very bias that it is intended to reduce,” a problem they called the “constant risk fallacy.” Comparing facilities on this basis may inflate apparent mortality at one hospital while deflating it at another.
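A hypothetical numeric sketch of the constant risk fallacy follows (the figures are illustrative, not the authors’): if two hospitals record comorbidities with different thoroughness, the same risk model expects different numbers of deaths for equivalent patients, and adjustment can create a gap in SMRs where none exists in the quality of care.

```python
# Hypothetical illustration of the "constant risk fallacy":
# risk adjustment assumes recorded risk factors mean the same thing everywhere.

observed_deaths = 100            # both hospitals lose the same number of patients
expected_a = 110                 # Hospital A codes comorbidities thoroughly,
                                 # so the model expects more deaths
expected_b = 90                  # Hospital B codes the same kinds of patients lightly,
                                 # so the model expects fewer deaths

smr_a = observed_deaths / expected_a   # ~0.91 -- looks better than average
smr_b = observed_deaths / expected_b   # ~1.11 -- looks worse than average

print(f"SMR A: {smr_a:.2f}, SMR B: {smr_b:.2f}")
# Identical care, but adjustment manufactures a 20-point gap between the ratios.
```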

According to the researchers, SMRs vary by as much as 60 percent across European hospitals. Part of this variance, they noted, stems from the fact that there is little or no correlation between how well a hospital performs on one condition or patient group and how it performs on another. Differences in the quality of care “within hospitals are much greater than differences between hospitals,” they wrote.

“Mortality rates, like knives and nuclear particles, are neutral; it is the use to which they are put that has moral salience and that will determine the balance of benefits and harms,” the authors wrote. “We believe that it is not collection of mortality rates per se that is wrong, but rather the use of mortality rates as a criterion for ‘performance management.’”

Rather than using mortality rates as indicators of performance, the researchers recommended basing assessments on error rates and measures of clinical process.

To that end, the researchers suggested auditing clinical processes to measure hospital performance and outcomes. “Process audits can provoke improvement wherever there is headroom for better performance, including where the hospital is an average or above average performer,” they wrote.

While Lilford and Pronovost noted that process measures are more expensive to track than outcomes, they said that as EMRs evolve, collecting them should eventually become effortless.

Because reviewers may be biased toward their own institutions, the authors suggested that physicians at one organization review the clinical processes and comparative measurements of another institution.

“We incline towards a bottom-up agenda for quality improvement and would advocate performance management at one remove, by ensuring that clinical teams have systems in place to monitor quality rather than collecting large amounts of poorly calibrated information centrally,” they wrote.

While quality measurement clearly needs to improve, the best strategy remains a matter of debate.

The authors concluded: “We do not pretend to have all the answers; the science needs to mature, not only to improve the measurement of quality, but also to learn how to use the (inevitably imperfect) measurements so that they do more good than harm.”
