Revenue Cycle

How Do You Distinguish Clinical Documentation?

The introduction of technology into the healthcare industry has disrupted the health information (HI) and clinical documentation integrity (CDI) profession more than could have been anticipated. Yes, documents are more legible, but the integrity of documents is increasingly questionable because it is difficult for those translating the health record into reportable healthcare data to distinguish “clinical documentation” from other text within the health record.

For the purpose of this article, we will define “clinical documentation” as an intentional diagnostic statement by a practitioner. We base this definition on the ICD-10-CM Official Guidelines for Coding and Reporting, which state, “The assignment of a diagnosis code is based on the provider’s diagnostic statement that the condition exists,” as well as the requirement that “the term provider is used throughout the guidelines to mean physician or any qualified health care practitioner who is legally accountable for establishing the patient’s diagnosis.” Unfortunately, we can no longer make the assumption that all conditions that appear within a document authored by a provider are “clinical documentation” and/or reportable diagnoses. This article will explore common problematic scenarios.

How Did We Get Here?

The Centers for Medicare and Medicaid Services (CMS) established the “meaningful use” Electronic Health Record (EHR) Incentive Programs (now known as the Promoting Interoperability Programs) in 2011 to encourage the healthcare industry to adopt, implement, upgrade, and demonstrate meaningful use of certified electronic health record technology using a three-stage approach through financial incentives.1

Stage 1 required hospitals and eligible professionals to complete a set of core objectives, which included: “Maintain an up-to-date problem list of current and active diagnoses based on ICD-9-CM or SNOMED CT.”2,3 However, a 2016 article in the Journal of Informatics in Health and Biomedicine found, “The lack of agreement on the definition of a problem list leaves each institution to determine its own problem list policy, which makes the interoperability of problem lists between institutions more difficult.”4

Not only do problem lists remain poorly defined within the healthcare industry, but tying the problem lists to a common dictionary does not promote consistency in how conditions are defined because EHRs allow organizations to map a code to their own terminology.5

Most organizations chose physician convenience over diagnosis precision, leaving many problem lists inaccurate. Furthermore, these diagnosis code titles often fail to reflect relevant clinical information and can contribute to mistakes like under-coding, where the practitioner selects a diagnosis that is less precise than what was actually addressed (including the selection of symptoms rather than a disease), or over-coding, where practitioners select a diagnosis that is more precise than assessed.6 Unfortunately, these problems persist and are often exacerbated by CDI and coding technologies that employ natural language processing (NLP) to skim text within the EHR and map phrases to applicable ICD-10-CM codes without sufficient context. In other words, NLP can be triggered by terms embedded in the EHR as a result of meaningful use, such as a code title rather than a diagnosis.
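To make the risk concrete, the context-blind phrase matching described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration; the phrase table, function name, and note text are invented for this example and are not drawn from any real NLP engine or ICD-10-CM mapping product.

```python
# Hypothetical sketch: why naive phrase-to-code matching is fragile.
# The phrase table below is illustrative, not a real code map.
PHRASE_TO_ICD10 = {
    "pneumonia, unspecified organism": "J18.9",
    "sepsis, unspecified organism": "A41.9",
}

def naive_code_matches(note_text: str) -> list[str]:
    """Return every ICD-10-CM code whose trigger phrase appears anywhere
    in the note, with no awareness of context or authorship."""
    text = note_text.lower()
    return [code for phrase, code in PHRASE_TO_ICD10.items() if phrase in text]

# An EHR-inserted code title (not a diagnostic statement) still triggers a hit:
note = "Problem list (auto-populated): J18.9 Pneumonia, unspecified organism"
print(naive_code_matches(note))  # ['J18.9'] despite no provider diagnostic statement
```

The embedded code title matches exactly like a provider's diagnostic statement would, which is the core problem: simple string matching cannot tell an auto-populated code title apart from clinical documentation.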

The prevalence of embedded code titles has also moved beyond the problem list, as many EHRs insert codes and their corresponding code titles within a physician assessment. This type of practice makes it more difficult to exclude these code titles from the coding process, as they may not be easily recognizable since they may appear within the context of the physician’s documentation. This leads some to assume the code title precisely reflects the provider’s clinical finding. However, these code titles are not a diagnostic statement and should be considered neither clinical documentation nor the basis for ICD-10-CM code assignment.

One of the most common examples of imprecision is when a provider chooses an unspecified code, like unspecified pneumonia, but the corresponding diagnostic statement supports a more specific code or the opportunity to query for a more precise pneumonia code. The inclusion of the auto-populated unspecified code should not be considered to conflict with diagnostic statements that support a more specific code or a query for one. More than likely, the unspecified code for pneumonia sits at the top of a dropdown or pick list, and the practitioner selects an associated diagnosis based on convenience rather than investing the time required to select the most accurate code; after all, code selection is not the practitioner’s responsibility.

According to Holmes, “In general, practitioners find that entering standardized data rather than free text typically takes more effort (e.g., more clicks) and often does not express the data in a way that best matches their thought processes.”7 So, it is no surprise practitioners are often populating problem lists on the basis of speed and efficiency over precision.

Foundations of Coding

The level of precision and specificity in provider documentation is vital to accurate medical coding. Coding professionals utilize provider documentation and critical thinking to assign medical codes, which are grouped under classification systems such as Medicare Severity Diagnosis Related Groups (MS-DRGs) or All Patient Refined Diagnosis Related Groups (APR-DRGs). They depend on provider documentation to accurately code a condition, which is reported for billing, quality, and statistical purposes.

To report a condition, coding professionals must follow the ICD-10-CM Official Guidelines for Coding and Reporting. The guidelines reference the Uniform Hospital Discharge Data Set (UHDDS) that defines “Other Diagnoses” as “all conditions that coexist at the time of admission, that develop subsequently, or that affect the treatment received and/or the length of stay. Diagnoses that relate to an earlier episode which have no bearing on the current hospital stay are to be excluded.”8

Additionally, the ICD-10-CM guidelines direct coding professionals to code all clinically significant conditions and define a clinically significant condition (i.e., one that affects patient care) as one that requires any of the following:

  • Clinical evaluation; or
  • Therapeutic treatment; or
  • Diagnostic procedures; or
  • Extended length of hospital stay; or
  • Increased nursing care and/or monitoring.
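The “any one criterion” logic of the list above can be expressed as a simple check. The following is a hypothetical sketch for illustration only; the field and function names are invented and do not come from the ICD-10-CM guidelines or any coding system.

```python
# Hypothetical sketch of the "meets ANY one criterion" test for a
# clinically significant condition. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class ConditionEvidence:
    clinical_evaluation: bool = False
    therapeutic_treatment: bool = False
    diagnostic_procedures: bool = False
    extended_length_of_stay: bool = False
    increased_nursing_care: bool = False

def is_clinically_significant(e: ConditionEvidence) -> bool:
    """A condition is clinically significant if it meets at least one criterion."""
    return any([e.clinical_evaluation, e.therapeutic_treatment,
                e.diagnostic_procedures, e.extended_length_of_stay,
                e.increased_nursing_care])

# A problem-list-only diagnosis with no supporting evidence fails the test:
print(is_clinically_significant(ConditionEvidence()))                            # False
print(is_clinically_significant(ConditionEvidence(therapeutic_treatment=True)))  # True
```

The point of the sketch is the disjunction: a single criterion suffices, but a diagnosis supported by none of them, such as one that appears only on a problem list, does not qualify.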

Determining what documentation can or should be used for coding is becoming increasingly difficult, as providers use the EHR in a variety of ways. Some keep the problem list up to date, while others maintain a problem list of historical diagnoses and an active problem list for current diagnoses. Not only do coding professionals need to identify a documented condition, but they also have to consider whether there is enough evidence to make it a reportable condition by meeting the above criteria. To make it even more challenging, providers often document statements like “monitor for acute blood loss anemia” or “meets sepsis criteria; will start sepsis protocol.” On the surface, it may appear that the criteria to report the condition are met; however, this is routine care, so the argument can be made that these are not reportable conditions.

Deciding whether a condition is reportable is more complicated when considering the concept of clinical validation and the acceptability of the clinical criteria used to make a diagnosis. Guideline I.A.19 of the ICD-10-CM Official Guidelines for Coding and Reporting states, “The assignment of a diagnosis code is based on the provider’s diagnostic statement that the condition exists. The provider’s statement that the patient has a particular condition is sufficient. Code assignment is not based on clinical criteria used by the provider to establish the diagnosis.”

At first glance, this guideline supports coding based on provider documentation within the health record. However, coding professionals must ensure that the documentation meets the UHDDS definitions of “Principal Diagnosis” and “Other Diagnoses,” which are included in the ICD-10-CM official guidelines. If a condition does not meet these definitions and is not reportable, CDI and coding professionals should make every effort to query the provider for further clarification. Not every condition documented in the health record will meet the criteria to be a reportable condition. For example, a diagnosis that is documented only on a problem list is not likely reportable in the absence of supportive documentation demonstrating that it was monitored, evaluated, or treated, or that it increased nursing care or the length of stay.

Foundations of Documentation

With the evolution of technology, the foundation of documentation has significantly changed over the years. Many EHRs have opened a new realm of concern regarding documentation integrity due to the inappropriate use of copy and paste, copy forward, and/or documentation templates that do not follow the traditional structure of the problem-oriented medical record (POMR).

Historically, many healthcare providers were trained to document in the health record following a method called the SOAP note. The SOAP acronym stands for subjective, objective, assessment, and plan. This method originated from the POMR where the intent is to provide a basic structure for providers to follow when documenting in the health record. The goal is to allow all providers to read a patient’s health record and clearly understand the patient’s problem and how the patient is being treated.

There are four total components within a SOAP note and, within each component, there are specific focus areas and information that a provider is expected to document. Within the subjective component, providers typically document the chief complaint (CC), history of present illness (HPI), social/medical/surgical history, and a review of systems (ROS). As for the objective component, this includes the provider’s physical examination findings. The assessment component is where the provider documents their diagnoses/findings that warranted the patient’s visit. This may include differential and/or definitive diagnoses. Finally, the plan component consists of the provider’s plan to address/treat the patient’s diagnosis(es). To many providers, this is known as the medical decision-making section. Typically, a plan is developed for each problem/assessment. The lack of standard notes is problematic for technologies dependent on NLP because it is difficult for the tools to learn what to ignore within a document compared to what is relevant, actionable documentation.

Additionally, more and more organizations use the health record as an education tool for the provider by embedding prompts and reminders as well as a record of the education that was provided to the patient. It is often difficult for NLP technology to identify the context of documentation. Some technologies can identify a diagnosis as a historical mention or when it is negated, but it is often much more difficult to read a template prompt like “sepsis criteria met” followed by a response of yes or no. CDI and coding professionals must be vigilant when working with any technology to validate the context of highlighted diagnoses to ensure it is a valid, reportable condition.
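As a rough illustration of why context matters, distinguishing a template prompt's recorded answer from a bare phrase match might look like the following. This is a hypothetical sketch; the prompt format and note text are invented, and real EHR templates vary widely.

```python
# Hypothetical sketch: reading the answer to an EHR template prompt
# instead of treating the prompt text itself as a diagnosis.
import re

def template_prompt_answer(note_text: str, prompt: str):
    """Look for a template prompt like 'Sepsis criteria met: Yes/No' and
    return the recorded answer ('yes'/'no'), or None if the prompt is absent."""
    pattern = re.escape(prompt) + r"\s*:?\s*(yes|no)"
    m = re.search(pattern, note_text, re.IGNORECASE)
    return m.group(1).lower() if m else None

note = "Screening template -- Sepsis criteria met: No. Continue routine monitoring."
print(template_prompt_answer(note, "sepsis criteria met"))  # 'no'
```

A phrase matcher that stops at “sepsis criteria met” would flag sepsis here; only by reading the recorded answer does the tool learn the screen was negative. Even this small step is brittle, which is why human validation of highlighted diagnoses remains necessary.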

As the health record becomes more complex and bloated, it is helpful to remember the seven common criteria that help define quality documentation:

  • Legibility
  • Completeness
  • Clarity
  • Consistency
  • Precision
  • Reliability
  • Timeliness

As referenced in the AHIMA Practice Brief titled “Guidelines for Achieving a Compliant Query Practice (2019 Update),” coding and CDI professionals utilize these criteria as a basis for CDI education and querying. However, while these criteria speak to the overall accuracy of the health record and how well it meets industry and regulatory standards, “it is outside the scope of querying professionals to manage provider documentation practices.”9

Without consistent, complete, clear, reliable, and timely documentation, a patient’s severity of illness will not be reported accurately, which has both immediate and long-term implications. The immediate consequence is to the quality of patient care, since the health record is used as a communication tool among all healthcare providers. Long-term implications may involve the patient’s overall risk scores, as all risk adjustment models depend on the accuracy of the reported data. Health record documentation must support the patient’s diagnoses. Provider documentation must include an assessment and treatment plan, along with supporting clinical indicators. Quality documentation will then be translated into accurate reportable diagnosis codes, which in turn will impact quality reporting and/or prediction of costs for the patient. This data will be utilized by consumers, payers, and providers.

Best Practices

With technology evolving, it is important for all users to understand the functionalities within each system and to establish best practices without taking shortcuts. It is important for hospitals and providers to have experts at the table when purchasing these new technologies. Vendors may present and market a system as solving coding and/or CDI issues related to documentation, but it may create other obstacles that require mitigation. Some technology utilizes natural language understanding (NLU) or NLP to “identify” diagnoses without looking for the clinical indicators required to support the reporting of such diagnoses as defined in the ICD-10-CM Official Guidelines for Coding and Reporting. Additionally, it is often difficult for technologies to reflect the nuances associated with many coding guidelines, like the coding of uncertain conditions in the inpatient setting.

CDI and other HI professionals should be at the table during these technology discussions because they are the subject matter experts who understand the importance of documentation and the complexity of the many reimbursement methodologies and their documentation requirements. It is problematic when vendors market a tool to non-subject matter experts because the use of NLU and NLP is not perfect; this type of technology is still a work in progress. For example, if a provider uses a tool that identifies diagnosis opportunities by leveraging NLU/NLP technology to analyze previously documented conditions, there is a risk that a provider unaware of the coding guidelines may erroneously report a historical condition as an active condition (e.g., a past history of malignancy reported as a current malignancy under active treatment).

In addition, there are times when NLU/NLP is driven by medications and other documentation in the health record that may not be appropriate for all patients. Some medications treat multiple conditions; therefore, the use of Lasix does not equate to a congestive heart failure diagnosis for every patient. It is best practice for provider documentation to demonstrate some kind of monitoring and/or treatment for every condition reported. Simply documenting the phrase “being monitored” is not enough to meet the definition of a clinically significant condition. Documentation should support how the condition was clinically evaluated or treated, and/or resulted in increased nursing care or monitoring. For example, if a patient has acute blood loss anemia, a provider may want to elaborate on how this condition increases monitoring by documenting “q4/q6 monitoring of H/H” rather than documenting the condition with only the phrase “will monitor.” This level of specificity helps illustrate and justify the coding of this clinically significant condition due to the increased level of monitoring (e.g., q4/q6 monitoring) and resources used.

Another area of opportunity involves the creation and monitoring of templates within the EHR. Providers should be educated on the implications of inappropriate use of the copy/paste functionality and of failing to update clinical documentation accordingly. The use of the EHR has created issues like note bloat, where unnecessary or inappropriate documentation is pulled into a note. Accuracy of the health record helps ensure the patient receives the proper treatment and the appropriate diagnoses and procedures are reported. Organizations can benefit from creating a committee that monitors and tracks the use of templates and provides tailored education to ensure documentation integrity.

This committee can also take on the task of monitoring the problem list, which has become problematic for many organizations. The problem list can include resolved patient conditions, new conditions, and chronic conditions that are currently being treated. Often, problem lists are not updated with each patient encounter, so they may not be accurate or up to date; therefore, the problem list should not be a source for coding. Many providers, especially specialists, do not feel comfortable updating portions of the problem list outside their focus areas. In addition, the problem list may not be reviewed by the provider but by the person (e.g., LPN, RN, etc.) performing the intake and/or initial review of medications and conditions during an office visit. Each organization should create policies that address who owns the problem list, who is responsible for updating it, and how to resolve conflicting documentation (e.g., Type I diabetes documented in the problem list while Type II diabetes is documented by the endocrinologist under assessment and plan).

Due to the increased use of technology by providers, especially in the outpatient setting, many providers now submit their own coding, which increases the risk of submitting inaccurate data. This lack of oversight over provider coding is further complicated at organizations that have recently implemented tools that identify documentation opportunities through claims data and keywords from previous documentation. Many providers are left to process these recommendations on their own and may inadvertently document and/or submit the wrong codes for billing purposes due to these “nudges.” This can result in increased reporting of acute conditions (e.g., sepsis, cerebrovascular accident, malignancy, etc.) in the outpatient setting, due to insufficient training in and understanding of ICD-10-CM coding guidelines. A CDI/coding professional reviewing these “nudges” could point out to the provider that a condition “flagged” as a documentation opportunity is no longer active and the patient is not being treated for it; therefore, it does not qualify as a reportable condition. The CDI/coding professional could then educate the provider on how to appropriately document historical conditions as resolved, whereas technology can only do so much before the provider becomes overwhelmed by too many “nudges” to consider.

Achieve Documentation Integrity

Technology is here to stay, and NLU and NLP are already being supplemented and/or replaced by newer technologies like machine learning. Organizations would be well served to have a dedicated committee that regularly evaluates the impact of the EHR and the technologies that supplement mid-revenue cycle functions to ensure the tools are functioning as desired and adding value rather than complicating existing CDI and coding processes. The accuracy of clinical documentation will always warrant critical thinking, and technology can only achieve so much; it requires human oversight, since medicine is never the same for each patient. Technology is supposed to help providers with documentation by improving efficiency, not complicate it by generating noise with unsupported recommendations. Therefore, going back to basics by remembering the intent of the health record and the four components of a SOAP note may help providers achieve documentation integrity while leveraging technology. By achieving documentation integrity, everyone gains: quality clinical care, accurate data and reimbursement, and improved quality scores.

Notes
  1. Centers for Medicare and Medicaid Services. “Promoting Interoperability Programs.” 2021. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms.
  2. Centers for Medicare and Medicaid Services. “Medicare and Medicaid EHR Incentive Program.” 2010. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/downloads/MU_Stage1_ReqOverview.pdf.
  3. Agency for Healthcare Research and Quality (AHRQ). Practice Facilitation Handbook, Module 17: Electronic Health Records and Meaningful Use. https://www.ahrq.gov/ncepcr/tools/pf-handbook/mod17.html.
  4. Krauss, John C.; Boonstra, Phillip S.; Vantsevich, Anna V.; and Friedman, Charles P. “Is the Problem List in the Eye of the Beholder? An Exploration of Consistency Across Physicians.” Journal of Informatics in Health and Biomedicine. 2016. https://academic.oup.com/jamia/article/23/5/859/2379895.
  5. Holmes, Casey. “The Problem List Beyond Meaningful Use, Part 1: The Problems with Problem Lists.” Journal of AHIMA. February 11, 2011. https://journal.ahima.org/wp-content/uploads/JAHIMA-problemlists.pdf.
  6. Ibid.
  7. Ibid.
  8. ICD-10-CM Official Guidelines for Coding and Reporting, FY 2021.
  9. AHIMA/ACDIS. “Guidelines for Achieving a Compliant Query Practice (2019 Update).”


Cheryl Ericson (cherylericson@comcast.net) is a CDI subject matter expert with a body of work that includes many speaking engagements and publications for a variety of industry associations. 

Rachel L. Pratt (rachel.pratt@hsc.utah.edu) is an inpatient coding supervisor at University of Utah Health.

Anny Pang Yuen (anny.yuen@apconsultingassociates.com) is a principal/independent consultant at AP Consulting Associates LLC.