Despite genuine efforts to track performance and improve the quality of healthcare in the U.S. and abroad, there is little evidence that these widespread but poorly designed processes have yielded better patient outcomes. According to the authors of an April article in Health Affairs, quality measurement needs an overhaul on the part of providers, along with oversight by a new government agency.
Substantial shortcomings in the quality of care are causing needless patient harm and increasing healthcare costs, argued Peter J. Pronovost, MD, PhD, from Johns Hopkins University in Baltimore, along with Richard Lilford, PhD, from the University of Birmingham in England. “Tension exists between scientists, who are dubious about the validity of many metrics, and policy makers, who have an obligation to protect the public,” wrote Pronovost and Lilford.
Patient outcomes are the bread and butter of performance measurement, with discharge data supplying the raw material for investigators including hospitals, payers and government agencies. However, Pronovost and Lilford cited studies showing that different methods or data sources have often produced contradictory results for identical outcomes. This unreliability is aggravated by the subjective nature of many of these interpretations as well as by frequently missing information.
The authors lambasted perhaps the most crucial patient metric: “The existing literature suggests that data on overall in-hospital mortality are more likely to misinform than to inform. This measure should be abandoned or used cautiously with other data until the science matures.”
Underlying these performance measurement issues is a lack of standardization of metrics and methods between researchers and regulators. “This process should be replicated so that clinicians, policymakers and researchers collaborate to define, by metric, how good is ‘good enough,’” Pronovost and Lilford wrote.
A new system of performance measurement is necessary, the authors argued, and should be characterized by transparency and validity at all levels. Seeing better outcomes as linked to improved measurement, Pronovost and Lilford envisioned the process as a public good—a nationwide research and measurement project meriting public funding and an independent agency for coordination.
In place of broad mortality figures and disparate standards, the authors called for more specific metrics, such as complications and mortality occurring in defined patient populations. A second, related recommendation would track these measures via “standardized surveillance” that could accurately identify patients’ risks of particular outcomes.
According to the authors, longitudinal process tracking is crucial to producing valid results. And with the transformed system putatively generating public value, measurement should be undertaken at resource-rich and resource-scarce facilities alike, whether by private researchers and statisticians or by a public agency.
As part of the overhaul, metrics themselves must be measured and evaluated against a backdrop of cost-benefit analysis, Pronovost and Lilford continued.
However comprehensive the redesign might become, changes at all levels of process measurement are necessary for the sake of patient care, the authors contended. “For the past decade, healthcare quality has largely sought quick fixes and run from science; the results are evident. Let us hope that efforts in the next decade embrace science instead.”