Health Data

Shortcomings in Speech-Recognition Technology

Professionals in health information management (HIM) and healthcare documentation have begun paying attention to the implications of using speech-recognition and natural language processing (NLP) technology.

NLP, which often works alongside speech recognition, is a component of artificial intelligence (AI) that gives computers the capacity to identify, interpret, and use human language.

In theory, speech recognition enables healthcare delivery and documentation that is faster, more cost-effective, and more accurate than handwritten notes or human-based transcription.1 In reality, speech recognition technology has fallen short of its potential.

About 98,000 patients die annually due to preventable adverse events, some of which are attributable to incomplete or inaccurate medical documentation.2 Speech recognition accuracy typically runs between 88 percent and 96 percent, yet speech recognition software vendors have told prospective healthcare facilities that the accuracy of the software is closer to 99 percent.

Gauging Accuracy

In 2016, researchers at two healthcare organizations launched a study to answer the question, “How accurate are dictated clinical documents created by speech recognition software, edited by professional medical transcriptionists, and reviewed and signed by physicians?”

Throughout 2016, the researchers collected 217 medical reports of assorted types that had been dictated by 144 physicians from two different healthcare facilities using speech recognition software.

Based on the original voice files and a review of the medical records, each inaccuracy was attributed to the software, to the healthcare documentation specialist (HDS) who edited the document, or to the physician who reviewed and signed the report.

The outcome was an eye-opener. Among the 217 randomly selected medical reports from the two healthcare organizations, the error rate was 7.4 percent in the version generated by speech recognition software, 0.4 percent after HDS review, and 0.3 percent in the final version signed by the dictating physicians.

Because the error rate in the raw speech recognition-generated reports exceeded seven percent, the study concluded that editing and review by HDS staff and healthcare providers remain critically important, since accurate medical documentation is essential to healthcare quality and patient safety.3

In their experience working with speech recognition, HDS staff have had to correct numerous mistakes made by the technology.

One example is an error in the prescribed dose of a medication. A dosage mistake left in a document that goes into a patient’s medical record can result in an adverse event. The technology has also transcribed incorrect medication names into reports; if not corrected, these errors can significantly harm patients.

Many medications have names that sound alike or are spelled similarly and can be transcribed in error by speech recognition technology, such as Celexa and Celebrex.
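The confusion is easy to quantify: sound-alike drug names are often only a few character edits apart, leaving a speech recognizer little acoustic evidence to separate them. The following is a rough illustrative sketch, not part of any vendor’s software; the word pairs and the standard-library similarity check are assumptions chosen for demonstration.

```python
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insertions,
    deletions, substitutions) needed to turn string a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

# Hypothetical confusable pairs drawn from the article's examples.
pairs = [("celexa", "celebrex"), ("nystatin", "niaspan")]
for a, b in pairs:
    dist = levenshtein(a, b)
    ratio = SequenceMatcher(None, a, b).ratio()
    print(f"{a} vs {b}: edit distance {dist}, similarity {ratio:.2f}")
```

Names that differ by only a handful of edits score high on string similarity, which is one reason an error like Celexa/Celebrex can slip past both the software and a hurried reviewer.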

Celexa is prescribed for depression and anxiety; Celebrex is prescribed for pain. If an undetected substitution caused the wrong medication to be given to a patient, the consequences could be dangerous. If the error is not corrected before the report is placed into the patient’s medical record, the medical facility and anyone involved in the mistake could also face financial liability.

Real-World Consequences

What follows is an example of the consequences that can occur when errors created in a medical report are overlooked.

An article from January 2013 published in the Detroit Legal News pulled no punches when it noted that a patient “died because of a typo” in 2008.4

The patient, a lifelong diabetic, was admitted to a hospital so that her dialysis port could be cleared of a blood clot. When she was discharged, her physician dictated her discharge summary, according to a news release.5

The physician was unaware that the medical center where the patient had been treated had contracted with an outside US-based medical transcription company, which in turn had outsourced his dictation to two overseas medical transcription companies. The hospital administrators had consented to outsourcing some of their dictation in order to save two cents per line.

After the patient had been home for a day, she and her family realized she needed skilled nursing care, so she was admitted to a rehab facility for short-term care. The rehab facility asked a nurse at the treating hospital to provide the patient’s transfer orders and medication reconciliation sheet, which had been reviewed and signed by the treating physician.

However, when told that those documents were being scanned into the patient’s electronic health record (EHR) and could not be transferred at that time, the nurse printed the patient’s discharge summary rather than waiting, and used the information from that report to fill out the admission and medication orders.

This document contained several speech recognition-generated errors that had gone undetected by both the outsourced medical transcriptionist and the quality assurance specialists who later reviewed the report.

Additionally, the treating physician had not yet reviewed or signed the document. The most critical error in this report was the dosage of Levemir insulin. The dictating physician had stated the dose as “eight units,” but the software had transcribed the dose as “80 units.”

The patient received the incorrect dose, which caused permanent brain damage resulting in cardiopulmonary arrest. The patient never recovered and died in the rehab facility a few days later.

The patient’s family filed a lawsuit citing negligence and a lack of critical thinking, naming the hospital that treated her, the rehab facility, the US-based medical transcription company, the two overseas medical transcription companies, the patient’s treating physician, and the nurse who had administered the lethal dose at the rehab facility.

During the court case, every defendant claimed they could not be held responsible for the patient’s death, according to a presentation at an Association for Healthcare Documentation Integrity conference.6

At the end of the trial, the jury found only the treating hospital and the three medical transcription companies responsible for the patient’s death and awarded the patient’s family $140 million in punitive damages.

Disclaimers

After this case, physicians and other healthcare providers began adding disclaimers at the end of their transcribed medical reports in an attempt to limit their responsibility.

“Please note that this dictation was completed with computer voice recognition software. Quite often unanticipated grammatical, syntax, homophones, and other interpretive errors are inadvertently transcribed by the computer software. Please disregard these errors. Please excuse any errors that have escaped final proofreading,” is an example of standard disclaimer language, according to a Pittsburgh-based law firm specializing in healthcare issues. In an informational article posted on its website, the firm responded to several inquiries about whether such disclaimers would offer healthcare providers protection against liability if a speech recognition-generated error harmed a patient.

The firm said that the disclaimers would probably increase the risk of liability instead. A Medicare contractor also responded that the physicians are responsible for all of the information that goes into a patient’s medical report and that using disclaimers in their reports does not remove that responsibility.7

Once signed, these medical reports are considered legal documents. Therefore, before signing, the document should be carefully reviewed by the dictating healthcare provider.

Several HDSs created a Facebook page showcasing speech recognition errors they have corrected. Table 1 (below) provides a few examples:

Table 1: Errors made by speech-recognition software
Provider: The patient had influenza A.

SR: The patient had a pleasant day.

Provider: The procedure is successful.

SR: The procedure is not successful.

Provider: Lipitor 20, two pills a day.

SR: Lipitor 22 pills a day.

Provider: Single lung transplant.

SR: Sing-along transplant.

Provider: Aspirin 325 mg p.o. daily.

SR: Digoxin 125 mcg p.o. daily.

Provider: Pravastatin 40 mg a day.

SR: Gabapentin 200 mg a day.

Provider: Nystatin.

SR: NIASPAN.

Provider: ALLERGIES: SULFA. (Next).

SR: ALLERGIES: XOPENEX.

Provider: Pulmonary vein isolation system.

SR: Pulmonary bagel isolation system.

Provider: Arterial insufficiency.

SR: Cheerio insufficiency.

Returning exclusively to traditional dictation and transcription is unlikely. A 2013 panel discussion suggested that organizations like the Association for Healthcare Documentation Integrity (AHDI) and the American Health Information Management Association (AHIMA) can work together to engage HIM and healthcare documentation professionals to mitigate incidents like this.8

AHDI and AHIMA should also work with The Joint Commission, encouraging it to strengthen its policy on physician confirmation across its certified hospital base.

The panel also suggested that these organizations could clarify guidelines concerning legal liability in these professions.

These organizations recently collaborated on a white paper and toolkit that update the standards and best practices for healthcare documentation quality assessment and management.

It is reassuring to see these associations collaborating to ensure the accuracy and completeness of medical reports. Their continued involvement should help secure the best outcomes for patients.

Notes
  1. Parente, Ronaldo, Ned Kock, and John Sonsini. “An Analysis of the Implementation and Impact of Speech Recognition Technology in the Healthcare Sector.” Perspectives in Health Information Management. June 18, 2004. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2047322/.
  2. Matthews, Kayla. “Why Medical Dictation is Still Better Than Voice Recognition…For Now.” Health IT Outcomes. December 20, 2019. https://www.healthitoutcomes.com/doc/why-medical-dictation-is-still-better-than-voice-recognition-for-now-0001.
  3. Zhou, Li, Suzanne V. Blackley, and Leigh Kowalski. “Analysis of errors in dictated clinical documents assisted by speech recognition software and professional transcriptionists.” JAMA Network Open. 2018. doi:10.1001/jamanetworkopen.2018.0530.
  4. Stephenson, Correy. “Medical facility outsourced transcription of doctor's orders to company in India.” Detroit Legal News. January 1, 2013. http://www.legalnews.com/detroit/1371036.
  5. Crumbie, Joan. “Jury holds hospital and transcription company responsible for fatal medication error: $140 million verdict.” PR Newswire. December 17, 2012. https://www.prnewswire.com/news-releases/jury-holds-hospital--transcription-company-responsible-for-fatal-medication-error--140-million-verdict-183799281.html.
  6. Sims, Lea. “The Juno Case: A Sentinel Event for the Transcription Sector.” Presentation at the Association for Healthcare Documentation Integrity (AHDI) Annual Conference, August 3, 2013.
  7. Horty Springer. “Question of the Week.” March 31, 2016. www.hortyspringer.com/question-of-the-week/march-31-2016.
  8. AHIMA and AHDI. Healthcare Documentation Quality Assessment and Management Best Practices White Paper and Toolkit. https://www.ahdionline.org/general/custom.asp?page=qa.

 

Brenda Wynn (brendawynn@embarqmail.com) has been working in the healthcare documentation industry for over 18 years, most recently as a Quality Assurance Specialist for Nuance Transcription Services, Inc. She is involved with several AHDI committees, including Ethics and the Research & Development Team, as well as the AHDI Advocacy Alliance Task Force.