Dollars & Cents: Rads Redefine Value

For the last few decades, radiology providers have survived (and thrived) in a volume-driven healthcare market. However, healthcare services and payments are in the midst of a massive upheaval, transitioning from fee-for-service reimbursement to value-based purchasing.

As imaging stakeholders peer into the future and connect the dots from fee-for-value reimbursement to physician compensation, it becomes clear that conventional approaches to productivity, performance and compensation will not suffice. In fact, business success could hinge on incentivizing behaviors that may go against the grain for many radiologists. A handful of thought leaders are suggesting new approaches to performance evaluation, aiming to drive change by quantifying performance measures like community service, collaboration and patient satisfaction.

“If radiology practices wait until the new rules are implemented, there will be a learning curve to implement the desired behavior. We should incentivize it now,” says Richard Duszak, Jr., MD, CEO of the Harvey L. Neiman Health Policy Institute in Reston, Va., which was created by the American College of Radiology to study the role of radiology in evolving healthcare delivery and payment systems.

Consider the typical response to an incidental pulmonary nodule. Despite well-established guidelines for follow-up, many radiologists rely on a personal protocol and issue a blanket recommendation for follow-up imaging. The fee-for-service model encourages this behavior, as there is no incentive to follow guidelines. In contrast, value-based purchasing could embed outcomes, such as improved access, decreased costs and less radiation exposure, in the reimbursement formula. The latter approach can discourage inappropriate use.

Reuben Mezrich, MD, PhD, professor and former chair of diagnostic radiology and nuclear medicine at University of Maryland School of Medicine in Baltimore, offers a more immediate rationale for alternate metrics. It’s not uncommon for radiologists to suffer from tunnel vision and get so busy focusing on image review and report generation that they forget about their real job—clinical consultation. Yet, practices face increasing local, regional and global competition, and nearly all deliver adequate image-turnaround services. Successful players differentiate themselves from the crowd.

“Practices have to offer products beyond fast turnaround and report generation. They have to provide service to clinicians, which becomes a part of the culture of the practice,” says Mezrich.

The trick is incentivizing clinical collaboration. In many situations, one radiologist might act as the practice powerhouse, rapidly reviewing images and churning out reports, while another focuses on reviewing cases with clinicians. The latter service currently does not translate into compensation. “But if you make the clinician happy, he will send you three more cases,” says Mezrich. A holistic, performance-based compensation system that recognizes and rewards such behavior can incentivize more radiologists to prioritize service.   

There are many monikers for these emerging models: team RVU [relative value unit], nonclinical RVU, academic RVU and performance-based incentive compensation. Whatever term is used, a few common threads link these models. “This is a whole new way of managing labor and compensation, distributing work and valuing efficiency,” says Marcia C. Javitt, MD, radiologist at Walter Reed National Military Medical Center in Washington, D.C. In these systems, practices use clinical and performance data from a dashboard, physician report card or metrics to monitor, modify and reward professional activities. The ideal is a flexible and transparent system.

 

Performance measures at a glance

RVUs: Appealing but flawed

Two separate, but related, issues cloud the question of performance evaluation. “I suspect half of radiology practices have seriously embarked on some sort of physician performance assessment, and most are focused on RVU-based productivity analysis. A minority of groups have started looking at other metrics such as quality, safety and efficiency,” says Duszak.

The RVU is a familiar productivity formula. The Centers for Medicare & Medicaid Services (CMS) determine a relative value for each imaging exam or procedure and base reimbursement on that figure. This translates into certain advantages as a radiology performance measure. RVUs can be extracted relatively easily from billing data. Plus, they are straightforward, easily understood and universal.    

The strictly RVU-based path to compensation and performance measurement, however, may lead physicians down a slippery slope. “The RVU model strongly favors cross-sectional imagers because of the high CPT [current procedural terminology] value of those studies,” says David M. Yousem, MD, MBA, director of neuroradiology at Johns Hopkins Hospital in Baltimore. Thus, two radiologists may work equally hard yet have completely different RVUs because of the types of studies they interpret.

Tying evaluation solely to RVUs can incentivize undesirable behavior, such as cherry-picking high-RVU cases or procedures and neglecting practice-building activities like service and leadership. These downsides, coupled with the impending value-based market, are sparking some provocative ideas among thought leaders.

Teamwork: A missing link

As healthcare shifts from an autonomous practice model to collaborative models like accountable care organizations and medical homes, physicians need to shift toward a team-based approach to care, says Javitt. She advocates the team RVU, a model characterized by teamwork, citizenship and a commitment to a whole that is greater than the sum of its parts.

Individual professional excellence remains a key ingredient, but collective excellence and responsibility matter as well. “There are huge challenges in creating [these types of] performance indicators. There are no well-established or long-term benchmarks for group quality measures,” explains Javitt.

Nor is there a one-size-fits-all formula.

“The ideal incentive plan agrees with, comports with and supports the practice’s mission and vision,” explains Yousem. It seems simple, but the calculus of connecting mission and vision to compensation is quite complex. Defining an objective measure to link an individual physician’s performance with practice goals like increased patient satisfaction is a challenge.

Metrics, says Yousem, should be SMART (specific, measurable, achievable, relevant and time-bound). Take, for example, an academic practice focused on increasing international collaboration. The practice could incentivize physicians to develop an online education program and measure the number of international residency programs that subscribe to or view the program over a defined period of time, as well as their satisfaction scores.

To move beyond the basic clinical focus, Duszak and Lawrence R. Muroff, MD, of Imaging Consultants in Tampa, Fla., coined the term “nonclinical RVU” in the July 2010 issue of the Journal of the American College of Radiology. The nonclinical RVU represents the sum of administration and leadership; practice, hospital and community service; professionalism; and quality and safety, or:

ncRVU = RVU_A + RVU_S + RVU_P + RVU_Q

Depending on the practice’s mission and goals, various inputs could be weighted to reflect organizational priorities. As Yousem noted, inputs should be quantifiable. Quality and safety, for example, could be measured in terms of report signature times or the percentage of critical results communicated within a certain time frame.
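To make the weighting arithmetic concrete, here is a minimal sketch in Python of how a practice might score a weighted nonclinical RVU. The component weights, the 0-to-100 scoring scale and the sample numbers are hypothetical illustrations, not figures from Duszak and Muroff's article.

    # Hypothetical weighted nonclinical RVU (ncRVU) calculation.
    # Weights and scores below are invented for illustration; each practice
    # would set its own to reflect its mission and priorities.

    WEIGHTS = {
        "administration_leadership": 0.30,  # RVU_A
        "service": 0.25,                    # RVU_S: practice, hospital, community
        "professionalism": 0.20,            # RVU_P
        "quality_safety": 0.25,             # RVU_Q: e.g., report signature times
    }

    def nonclinical_rvu(scores):
        """Weighted sum of the four nonclinical components (scores on a 0-100 scale)."""
        return sum(WEIGHTS[component] * scores[component] for component in WEIGHTS)

    # One radiologist's scores for a review period (invented).
    scores = {
        "administration_leadership": 60,
        "service": 80,
        "professionalism": 90,
        "quality_safety": 75,  # e.g., 75 percent of critical results called within one hour
    }

    print(round(nonclinical_rvu(scores), 2))  # 74.75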

Nonclinical RVUs can inform a report card for performance review or be used as a tool for determining bonus incentives.

Radiologists have a nearly clean slate for developing performance-based metrics. “Thoughtful policymakers are not pushing one rigid system. Value-based purchasing models give physicians a set of options that can work in a range of individual practices. Each practice has to pick and choose its priorities and tailor radiologists’ expectations accordingly,” says Duszak. For example, a practice might emphasize safety and clinical collaboration, and weight the metrics tied to those outcomes more heavily than other factors.

Early templates: The academic RVU

It’s not surprising that academic medical centers have been the first to dip their toes into alternate productivity measures. In 2006, the University of Maryland School of Medicine developed and implemented an academic RVU system to measure academic productivity.

“It’s a challenge to encourage more research and more academic productivity, yet, at the same time, realize not every physician contributes to the practice in the same way,” explains Mezrich. Some radiologists write papers, others pursue grants, while some focus on teaching, and some avoid those activities but emphasize clinical work that generates revenue. Until recently, academic departments lacked an objective way to measure varied contributions.

Mezrich and colleagues devised a system composed of a departmental policy and a related software program to recognize the different components of the academic RVU. The point system measures these components, with the total academic RVU calculated by summing publications, administrative and community service, teaching and research. Each factor is weighted and can be adjusted based on faculty input or a change in strategic vision.

During the final years of his tenure as department head, Mezrich used the data to inform annual salary increases, basing 60 percent of the salary increase on the clinical RVU, 30 percent on the academic RVU and 10 percent on other factors.
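As a rough worked example of that split (a minimal sketch in Python; the raise pool and attainment fractions are invented, not figures from Maryland's program):

    # Hypothetical raise under Mezrich's 60/30/10 split. The raise pool and the
    # attainment fractions below are invented for illustration only.

    raise_pool = 10_000          # maximum annual salary increase, in dollars
    clinical_attainment = 0.80   # share of available clinical-RVU credit earned
    academic_attainment = 0.50   # share of available academic-RVU credit earned
    other_attainment = 1.00      # share of credit earned on other factors

    increase = raise_pool * (0.60 * clinical_attainment
                             + 0.30 * academic_attainment
                             + 0.10 * other_attainment)

    print(round(increase))  # 7300, i.e., a $7,300 raise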

Mezrich stresses simplicity, transparency and collaboration. The program pulls information from radiologists’ curriculum vitae. Radiologists can view their academic RVU at any time, and the department distributes a free CD with the program to any others interested in implementing a similar model.

Five years after the program debuted, the university continues to rely on the system, now dubbed Performance-Based Incentive Compensation. There have been a few tweaks. Mezrich observed that radiologists found it challenging to change behaviors based on annual feedback. Now the department runs the numbers twice annually. He says increasing the frequency and including the analysis with quarterly performance evaluations could help radiologists internalize these goals.

Another potential challenge to developing and implementing performance metrics is radiologists’ susceptibility to the Lake Wobegon effect. Most, if not all, believe they are above average. They also are accustomed to a tremendous amount of autonomy in how they practice. Add to this physicians’ cultural resistance to performance evaluation, and the path to alternate RVUs seems littered with potential roadblocks.

Duszak emphasizes a proactive approach to the process of developing performance measures. Practices should:

Tie report cards and evaluations to metrics that physicians can control;

Involve the rank and file in matching priority metrics to the institution’s mission; and

Designate a performance committee of four to eight members to balance group buy-in with an operationally functional process.

Additional inputs

The baseline shift to alternate performance evaluation and compensation represents a fairly monumental change. For ambitious thought leaders, it is a first step.

The team RVU concept, according to Javitt, extends beyond compensation and includes quality metrics for patient outcomes. Radiology groups need to consider whether outcomes, such as length of stay or surgical results, support the imaging algorithm used to work up the patient. This type of analysis requires practices to feed retrospective patient data into the front end of the informatics system, so that population-based patient data can be used to guide imaging orders. “In other words, it’s the concrete implementation of evidence-based medicine.”
