Barriers to useful comparisons of CDS


When it comes to implementing clinical decision support (CDS) systems, purchasers often lack the information needed to study, compare and contrast systems, according to an article published online Feb. 9 in the Journal of the American Medical Informatics Association.

The article authors, including Gaurav Jay Dhiman of the University of Miami Miller School of Medicine, wrote that in-depth research and comparisons between systems would help clinicians and purchasers make informed decisions about which programs to install for their organizations.

“Although past normative analyses have addressed regulation of [CDS], alert fatigue, and drawbacks to increased liability for users, insufficient attention has been paid to the consumer decision process behind selecting a preventative, diagnostic and treatment [CDS],” Dhiman and colleagues wrote.

Dhiman and team focused on obstacles to comparative CDS studies: many CDS systems are built for a single medical system, and studies spanning multiple sites are rare; programs quickly grow outdated and require updates that can disrupt studies in progress; and inpatient and large academic hospitals are studied most frequently, making findings less translatable to smaller, private settings.

The article provided several recommendations for purchasers and researchers:

  • More guidance for purchasers: Dhiman and colleagues called for more guidance tools provided by governments, nonprofit organizations and private organizations to help purchasers. They recommended the Agency for Healthcare Research and Quality’s Health IT Evaluation Toolkit as a model.
  • Research: Although measuring limited outcomes when comparing CDS systems has proven difficult, that difficulty shouldn’t be a barrier to future studies. “At the least, a larger number of independently funded studies focused on limited outcomes are needed, given the lack of financial incentives for software and system comparisons,” Dhiman and colleagues wrote.
  • Transparency: The authors asserted that unbiased disclosure of adverse CDS events is vital to enabling more meaningful studies in the field. “The disclosure of each and every trivial defect may be too cumbersome for companies to report, and so is not necessary; however, adequate disclosure done in good faith should be encouraged, and standards should be developed to support and guide it,” the authors wrote.

Dhiman et al concluded that removing informational barriers while promoting more guidance from comparative research can foster the evolution of evidence-based tools like CDS and empower system purchasers to make better decisions for their organizations.