Regulatory and Health Industry

Artificial Intelligence and Machine Learning Help Treat EHR ‘Click Fatigue’

After years of physician complaints about burnout and frustration related to electronic health records (EHRs), new technologies powered by natural language processing (NLP), artificial intelligence, and digital scribes are beginning to improve physician satisfaction.

The Yale School of Medicine made two changes to the way physicians chart in EHRs, focused on reducing clicks and limiting keyboard-and-mouse interaction. One solution allows physicians to use their ID badges to tap in and out of the system after a single username-and-password login at the start of the workday, according to the American Medical Association’s AMA Wire.

“This was a daily annoyance for our doctors,” Yale Medicine Chief Medical Officer Ronald Vender, MD, told AMA Wire. “It had a disproportionate effect above and beyond the time with just the annoyance factors. Addressing this psychologically, as well as time savings, has been a huge win.”

This change alone saved physicians between six and 20 minutes each day, a reflection of the roughly 20 to 140 logins each physician performs daily. Yale also added voice recognition software to its EHR, which lets doctors enter their notes much more quickly at the point of care. Physicians like that dictating at the bedside gives patients the opportunity to see what is written about them in the record and to provide instant feedback.

“I type very fast and I thought, ‘I don’t need voice recognition,’” Allen Hsiao, MD, the chief medical information officer at Yale, told AMA Wire. “I quickly found that I have better notes, higher quality, I put in things that I would have thought isn’t worth the time and effort to type, but I will now speak them. It is easier to speak them even for people who type well.”

At Yale, voice recognition has reduced physician time spent charting by 50 percent.

Tech Companies Find New Niche
Major tech companies such as Google are starting to stake out territory in this space. Medical Brain, a healthcare-focused team within Google Brain, the company’s artificial intelligence division, “would likely take advantage of the complex voice technologies Google already uses in its Home, Assistant, and Translate products,” reported Jillian D’Onfro and Christina Farr of CNBC.

Google’s Medical Brain conducted a digital scribe study with Stanford Medicine. The digital scribe also uses voice recognition and machine learning tools to help doctors automatically fill out EHRs from patient visits.

As Stanford investigators explain it, the voice recognition technology “listens in” during a patient encounter to pick out relevant details the physician will use while charting, CNBC reported.
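To make the digital scribe concept concrete, the following is a minimal, purely illustrative Python sketch of how such a pipeline might pre-fill a few note fields from a visit transcript. It assumes the transcript has already been produced by a speech-to-text service, and it substitutes simple keyword rules for the machine learning models a real scribe would use; the names (DraftNote, draft_note_from_transcript) are hypothetical and do not reflect Google’s or Stanford’s actual software.

```python
# Illustrative sketch only: a toy "digital scribe" step, not Google's or
# Stanford's actual system. Input is a transcript string assumed to come
# from a speech-to-text service.
import re
from dataclasses import dataclass, field


@dataclass
class DraftNote:
    """Minimal EHR note skeleton the scribe tries to pre-fill."""
    chief_complaint: str = ""
    medications: list = field(default_factory=list)
    follow_up: str = ""


def draft_note_from_transcript(transcript: str) -> DraftNote:
    """Pick out a few charting-relevant details from the visit transcript
    using simple keyword rules (a stand-in for the machine learning models
    a real digital scribe would rely on)."""
    note = DraftNote()
    for sentence in re.split(r"[.?!]\s*", transcript):
        lowered = sentence.lower()
        if "here for" in lowered or "complain" in lowered:
            note.chief_complaint = sentence.strip()
        if "prescrib" in lowered or "taking" in lowered:
            note.medications.append(sentence.strip())
        if "follow up" in lowered or "come back" in lowered:
            note.follow_up = sentence.strip()
    return note


if __name__ == "__main__":
    visit = ("The patient is here for a persistent cough. "
             "She is taking lisinopril daily. "
             "We will follow up in two weeks.")
    print(draft_note_from_transcript(visit))
```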

“This is even more of a complicated, hard problem than we originally thought,” lead Stanford researcher, Dr. Steven Lin, said. “But if solved, it can potentially unshackle physicians from EHRs and bring providers back to the joys of medicine: actually interacting with patients.”

Mary Butler is the associate editor at Journal of AHIMA.