The California Medical Association (CMA) has announced its withdrawal from a Blue Shield of California initiative to rate physician performance, citing the program's “serious and disturbing flaws in how data are collected on physicians that result in gross inaccuracies.” In recent interviews, Blue Shield defended its program, while CMA reiterated its objections to the payor's method of ranking state physicians.
Blue Shield of California issued a response letter to CMA on April 16, after the association announced it was pulling out of the payor's Blue Ribbon Recognition Program, an effort of the California Cooperative Healthcare Reporting Initiative (CCHRI), a healthcare collaborative that includes Blue Shield. Beginning June 1, the program intends to publish physician ratings that publicly recognize physicians who scored above average on up to eight measures, including preventive screening and diabetes care, allowing the public to identify high-volume, high-performing doctors online.
Since 2006, the California Physician Performance Initiative (CPPI), a multi-stakeholder program run by CCHRI, has collected data to measure and report on the performance of California's physicians.
“We participated in this process for two years,” stated Andrew LaMar, spokesman for CMA. “Because medicine is both an art and a science, it's hard to come up with a fair, accurate assessment of a physician's work, since many factors are involved.”
“It seems to us that CMA is not in favor of public reporting of physician quality measures,” said Michael-Anne Browne, MD, medical director for quality at Blue Shield of California. “The way we have gone about [the Blue Ribbon Program] is consistent with industry standards using measures by which health plans and medical groups in California are publicly reported, and we have tried to do it in a way that has enough data so we can have a statistical reliability about our measurement.”
LaMar acknowledged that Blue Shield of California has made changes in response to CMA's past concerns and that the process has been a long one with give and take on both sides. However, he noted that CMA's criticism was directed not at Blue Shield itself but at the current ranking system. “[The letter is] our accurate representation of where the process is at this moment.”
According to Browne, it is not necessary to examine how a clinician cares for every diabetic patient to get a read on how that doctor generally works with diabetics. Working with measurement specialists, Blue Shield determined the patient sample size sufficient to provide statistical reliability for each measure, using a “nationally recognized benchmark” reliability threshold of 0.7. Depending on the measure, it takes about 30 patients to reliably rate a doctor, Browne stated.
Patient data are linked to applicable physicians (mainly adult primary care, endocrinology, OB/GYN, cardiology and rheumatology clinicians), according to Browne. Of the 60,000 doctors contracted with Blue Shield of California, 12,000 were eligible to be scored because they practice in an applicable specialty and see enough patients, she said, and 6,000 will earn one or more blue ribbons on specific measures.
LaMar called it unfortunate that patient data are linked to the physician regardless of what the patient does, without factoring in whether the patient follows through. “There's a variety of reasons patients may not follow through on what a physician suggests or recommends. The patient may not want the treatment or may be uninsured or unemployed and can't afford what treatment a physician suggests,” said LaMar. “It just doesn't seem logical not to consider this information whatsoever in these ratings.”
In the letter to CMA, Blue Shield of California asserted that physicians are not penalized, only acknowledged for superior performance. “The CPPI project and Blue Shield do not penalize physicians with noncompliant patients whose rates are not 100 percent,” stated the letter to CMA. “Blue Shield simply provides positive public recognition for those who have sufficient volume to be credibly rated and have better results, including those whose scores fall well below 100 percent.”
Not wanting “to draw a definitive line in the sand,” Blue Shield set the “better performer” threshold at the 65th percentile, designating the top 35 percent of high-volume clinicians as high performers, stated Browne. “Then, we applied a buffer