Comparative effectiveness research can often lead to improved medical practice when a procedure is shown to be effective, but what happens when a procedure, especially a popular one, is shown to be ineffective?
This question is explored in an article appearing in the December issue of Health Affairs, in which the authors (Katharine C. Wulff, Sommer Scholar at the Johns Hopkins Bloomberg School of Public Health in Baltimore; Franklin G. Miller, PhD, of the National Institutes of Health in Bethesda, Md.; and Steven D. Pearson, MD, also of the National Institutes of Health) conducted a series of interviews examining the story of percutaneous vertebroplasty to see how “negative” research findings are interpreted in the U.S.
First performed in 1984, vertebroplasty has been covered by Medicare since 2001 and by almost all private health plans since 2007, according to the authors. In 2009, however, the New England Journal of Medicine published the results of two studies that called into question the effectiveness of vertebroplasty. Those studies demonstrated that vertebroplasty produced no better results than a sham procedure in which a needle is inserted into the patient but no cement is injected.
The results of the two studies sparked a small controversy as some professional organizations voiced their support for the procedure while questioning the studies’ methods. Wulff et al cited a statement from the Society of Interventional Radiology that said, “based on the…weaknesses in the studies, we believe it is premature—and possibly incorrect—to conclude that vertebroplasty is no better than a sham.”
Most payors did not adjust their coverage policies, and those that did quickly backpedaled, according to the authors. In December 2009, Aetna gave notice that it would rescind coverage for the procedure, but then retracted the policy change after criticism from network physicians. Medicare did not launch a national coverage determination process, but in May 2010, a regional Medicare contractor, Noridian Administrative Services, issued a draft policy that rescinded vertebroplasty coverage. Noridian, too, backed away from the policy shift and left coverage in place.
Wulff et al said the vertebroplasty saga provides several lessons. The first is that granting coverage before rigorous evidence is gathered can create barriers to future randomized trials and insurance policy adjustments. After covering the procedure for years, any change could be seen publicly as cutting off access to care.
“Coverage of the procedure before rigorous evidence emerged created a negative feedback loop: The more widespread the coverage, the more difficult it became to complete the randomized trials,” wrote the authors.
Another issue is the difficulty of interpreting negative studies. The authors wrote that it is difficult to prove a negative, and the vertebroplasty studies were criticized on the grounds that an as-yet-unidentified subgroup of patients might still benefit from the procedure.
The culture of interventional specialists can also slow the wider recognition of a negative study result. Vertebroplasty requires manual skill, and the authors noted that clinicians who frequently perform the procedure questioned the expertise of the study practitioners. In fact, the authors noted that skepticism rooted in perceived conflicts of interest colored how all sides interpreted the findings of the trials.
“Researchers and payors believed that professional societies were motivated to preserve a lucrative practice; payors thought that clinicians and the public would view any change in coverage as an attempt to save money; and clinicians perceived researchers to be disconnected from the clinical implications of their work,” wrote the authors.
Wulff et al offered four pieces of advice to better integrate the results of comparative effectiveness research:
- Strengthen evidence requirements–Medicare and other insurers need to work with clinical researchers to develop transparent and consistent standards requiring evidence before unrestricted coverage is granted to a new procedure.
- Engage clinicians and patients in research design–The authors suggested that better engagement of stakeholders, particularly patients and clinicians, will ease the application of new evidence.
- Reward evidence-based practice–Financial incentives can push clinicians to defend interventions in the face of negative evidence. The authors suggested bundled payments or global