Another Disappointing Study about Clinical Decision Support

Many people believe that computer-generated alerts and reminders can improve the quality of care. Indeed, these presumed gains in quality represent a key rationale for the federal government’s HITECH incentive program to accelerate EHR adoption.

The problem is that numerous recent studies of clinical decision support (CDS) have shown mixed results at best.

In the latest one, Kaveh Shojania and colleagues at the University of Toronto reviewed 28 clinical trials of computer-generated alerts delivered to physicians during e-prescribing or charting. Overall, they found that CDS improved process-of-care performance by a measly 4.2%, a finding they deemed “below thresholds for clinically significant improvement.”

Shojania’s group scanned MEDLINE, Embase, CINAHL, and other databases for randomized trials that evaluated the efficacy of computer-based alerts delivered to physicians at the point of care. Nineteen of the 28 studies were based in the US, and 20 occurred in outpatient settings.

The scientists focused on process-of-care improvements rather than clinical outcomes in order to determine whether the alerts actually changed provider behavior. The degree to which such changes ultimately improve patient outcomes would depend on the strength of the association between the targeted processes and clinical outcomes.

The disappointingly small median improvement of 4.2% extended to various subgroups as well. For example, prescribing behaviors improved by 3.3%, adherence to targeted vaccinations by 3.8%, and test-ordering behavior by 3.8%. Similarly, physicians who were alerted about blood pressure elevations reduced their patients’ systolic blood pressure by a pathetic 1.0 mm Hg more than those who did not receive alerts.

The dismal findings could not be attributed to the year in which the studies were published or the countries where they were carried out. Inpatient alert systems fared a bit better than outpatient interventions (median improvement 8.7% vs. 3.0%), but these findings were skewed by favorable results at one institution (Brigham and Women’s Hospital), which had a well-developed, “homegrown” computerized order entry system used primarily by medical residents.

“Until further research identifies [clinical decision support] features that reliably predict clinically worthwhile improvements in care,” the authors conclude, “implementing these technologies will constitute an expensive exercise in trial and error.”

In my next two posts (Wednesday and Friday), I’ll propose a research agenda that I hope can shed light on this mess. Perhaps it can help us design CDS systems that achieve their tremendous potential as quality improvement tools.

Glenn Laffel, MD, PhD
Sr. VP Clinical Affairs, Practice Fusion
