Incentives to hospitals relating to computerized physician order entry (CPOE) should include some demonstration that clinical decision support (CDS) is actually being employed, rather than being based solely on whether CPOE is in use, according to a study in the April issue of Health Affairs.
Jane Metzger, principal researcher at CSC Healthcare in Waltham, Mass., and colleagues studied a sample of 62 U.S. hospitals that, between April and August 2008, used a simulation tool designed to assess how well safety decision support performed when applied to medication orders in CPOE.
The initial impetus for developing the assessment tool was a standard developed by the Leapfrog Group, an employer group that seeks to accomplish breakthroughs in hospital patient safety through a combination of public awareness and rewards to higher-quality providers, the authors stated.
“The Leapfrog standard includes two elements of meaningful use to ensure CPOE has been implemented in such a way as to improve medication safety. According to the standards, physicians and other licensed providers must enter at least 75 percent of medication orders using computerized entry. Clinical decision support must also be able to avert at least 50 percent of common, serious prescribing errors,” Metzger and colleagues wrote.
The hospitals' systems detected only 53 percent of the test medication orders that would have resulted in fatalities, and between 10 and 82 percent of the test orders that would have caused serious adverse drug events, the authors said.
Mean hospital scores were higher for orders leading to adverse drug events that can be addressed by basic decision support (61 percent) than for those requiring more advanced CDS (25 percent), the study found.
Potential adverse drug events involving drug-to-diagnosis contraindications, including pregnancy, were detected only 15 percent of the time. The adverse drug event category detected most reliably was drug-to-allergy contraindication (83.3 percent detected). Drug-to-drug interaction adverse drug events were detected 52.4 percent of the time. “Much higher scores were obtained for each of the categories addressed by basic clinical decision support than for those requiring advanced tools,” wrote the team.
“[W]e found significant variability in the use of decision support to detect and provide advice or an alert concerning a medication order that would result in serious harm to an adult patient. Some hospitals performed very well, while others performed very poorly,” the authors found. “In addition, the studied hospitals as a group were using basic decision support far more than the more advanced tools needed to detect types of orders that are major contributors to adverse drug events in chart-review studies.”
The authors noted that limited information was available for exploring contributing factors such as the vendor software in use, teaching status, hospital size by number of beds, and whether the hospital was part of a health system.
Metzger and colleagues also remarked that, in a multiple regression model, vendor choice was significantly correlated with performance. “[T]here is good statistical evidence to suggest that choice of vendors does have some positive effect on performance. However, vendor choice accounted for only 27 percent of the total variation that we observed in performance,” they wrote.
“These findings point to the importance of evaluations of the use of clinical decision support by hospitals to help guide their continuing efforts to improve medication safety,” the authors concluded. “The broader use of this type of assessment of meaningful EHR use should be explored for other software applications used in direct clinical care.”