Read the Original Article at http://www.informationweek.com/news/showArticle.jhtml?articleID=232500679
Unfortunately, as a recent paper in the Journal of the American Medical Informatics Association noted, medical problem lists are often incomplete. In a previous study of a primary care network affiliated with Brigham & Women's, the authors found that "completeness ... ranged from 4.7% for renal insufficiency or failure to 50.7% for hypertension, 61.9% for diabetes, to a maximum of 78.5% for breast cancer, and other institutions have found similar results."
To increase the comprehensiveness of the diagnosis list, Adam Wright, MD, assistant professor of medicine at Brigham & Women's and Harvard Medical School, and his colleagues designed a "problem inference" tool that uses billing codes, lab results, medications, and other data to infer the missing diagnoses.
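The paper does not publish the tool's internals, but the general approach — mapping structured evidence such as billing codes, medications, and lab values to candidate diagnoses — can be sketched roughly as follows. The rules, code sets, drug names, and thresholds below are invented for illustration and are not the actual Brigham & Women's knowledge base:

```python
# Hypothetical sketch of rule-based "problem inference". The rules,
# billing codes, medications, and lab thresholds are illustrative
# inventions, not the actual knowledge base from the study.
RULES = {
    "diabetes": {
        "billing_codes": {"250.00", "E11.9"},     # example ICD codes
        "medications": {"metformin", "insulin"},
        "lab": ("hba1c", 6.5),                    # (test name, threshold)
    },
    "hypertension": {
        "billing_codes": {"401.9", "I10"},
        "medications": {"lisinopril", "amlodipine"},
        "lab": None,
    },
}

def infer_problems(record, problem_list):
    """Return diagnoses supported by the record but missing from the problem list."""
    inferred = []
    for dx, rule in RULES.items():
        if dx in problem_list:
            continue  # already documented; nothing to suggest
        # Evidence: any overlap with billing codes or medications...
        has_evidence = bool(
            rule["billing_codes"] & set(record.get("billing_codes", []))
            or rule["medications"] & set(record.get("medications", []))
        )
        # ...or a lab value at or above the rule's threshold.
        lab = rule["lab"]
        if lab and record.get("labs", {}).get(lab[0], 0) >= lab[1]:
            has_evidence = True
        if has_evidence:
            inferred.append(dx)
    return inferred
```

In this sketch, a record with a diabetes billing code or an elevated HbA1c triggers a suggestion only if diabetes is not already on the problem list — matching the study's design, in which the physician then accepts or rejects the alert.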
After validating the tool for 17 health conditions, the researchers set out to prove that it worked in the real world. They conducted a randomized trial involving 11 primary-care clinics in the Brigham & Women's network. The practices included 28 clinical areas, which were evenly divided between intervention and control sites.
The results of the six-month trial were promising. The physicians in the intervention sites accepted 41% of the 17,000 alerts about missing diagnoses that they received. They also added 70% of the problems in the alerts to their problem lists. Including new and old problems, the intervention sites added nearly three times as many diagnoses to the lists as the control sites did.
Why were only 41% of the alerts accepted? Wright offered two reasons: First, some physicians rejected the suggestion that a patient had a particular problem. And second, many clinicians were in a hurry and didn't have time to check out the alerts, which is consistent with the fact that many clinicians complain about alert fatigue while using EHRs.
Wright believes that technology could further increase the comprehensiveness of problem lists. Researchers at the University of Utah, he noted, have used natural language processing (NLP) software to parse the free text in electronic records. Because physicians often mention diagnoses in transcribed dictation or other documents, NLP offers another way to spot problems left off of problem lists.
The NLP approach would also resolve an issue that the Brigham & Women's researchers encountered: the inference method cannot distinguish between diagnoses that involve similar tests or medications, such as asthma and COPD. "There's a tremendous promise in using NLP," Wright said.
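The kind of free-text mining described above can be sketched in miniature. This toy example — with made-up term lists and a crude negation filter — is purely illustrative; production clinical NLP systems, like the University of Utah work Wright cites, handle negation, abbreviations, and context far more robustly:

```python
import re

# Illustrative-only term lists; a real system would draw on a
# clinical vocabulary such as SNOMED CT rather than hand-picked strings.
DIAGNOSIS_TERMS = {
    "asthma": ["asthma"],
    "copd": ["copd", "chronic obstructive pulmonary disease"],
}

# Crude negation filter: drop the rest of a clause after a negation cue,
# so "denies asthma" does not produce a false positive.
NEGATIONS = re.compile(r"\b(no|denies|without|negative for)\b[^.]*")

def find_diagnoses(note):
    """Return diagnoses mentioned (non-negated) in a free-text note."""
    text = NEGATIONS.sub("", note.lower())
    found = set()
    for dx, terms in DIAGNOSIS_TERMS.items():
        if any(term in text for term in terms):
            found.add(dx)
    return found
```

For a note reading "History of COPD. Denies asthma.", this sketch flags COPD but not asthma — illustrating how free text can separate diagnoses that look identical in structured data.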
In any case, the method that Wright's team devised is good enough that Partners Healthcare, which includes Brigham & Women's Hospital, has expanded its use to primary care and specialty clinics in the Massachusetts General Hospital network.
Eventually, Wright said, he'd like to conduct a multi-center trial of the method that involves other organizations around the country. Meanwhile, his team has published its knowledge base, "and we'd love to see EHR vendors adopt it," he said.
More complete problem lists would improve care across organizations and communities, Wright noted. They'd be especially helpful to physicians in emergency departments and urgent care centers, and could also benefit doctors treating new patients they encounter when on call. And when physicians use health information exchanges to trade patient data across organizations and regions, complete problem lists could make a big difference in the ability to provide first-rate care.
When are emerging technologies ready for clinical use? In the new issue of InformationWeek Healthcare, find out how three promising innovations--personalized medicine, clinical analytics, and natural language processing--are faring, and the trade-offs they involve. Download the issue now. (Free registration required.)