The Problem List beyond Meaningful Use

Part I: The Problems with Problem Lists

In January the federal government launched its meaningful use program, based on a set of standards designed to ensure that healthcare providers adopt electronic health records (EHRs) that can produce better health outcomes. One criterion focuses specifically on the problem list and requires that eligible providers and hospitals code every patient’s problem list using a common dictionary. A common dictionary will help facilitate future decision support tools and prepare the problem list for upcoming health information exchange.

While a coded platform will be a step in the right direction, it unfortunately will not be enough to create problem lists that fully support the needs of modern medicine. Currently, the content and use of today’s problem lists varies widely from practitioner to practitioner, and this diversity can compromise patient care. The future of the problem list needs to move beyond coding to standardization of content and utilization.

Why Is the Problem List Important?

The problem list was originally created by Lawrence Weed in the 1960s as part of his recommendation for a problem-oriented medical record. A simple idea, the problem list soon became a commonly accepted part of the medical record and is used in most EHRs today.

At a high level, the problem list states the most important health problems facing a patient. While the basic structure of the problem list varies widely by healthcare organization, at its core, the problem list includes a patient’s nontransient diseases.

The problem list offers four major benefits to patient care. In the office, the problem list helps practitioners identify the most important health factors for each patient, allowing for customized care. Beyond the patient visit, the problem list can be used to identify disease-specific populations. It is easy to run data analysis and find all patients with a common illness through coded problems in an EHR. This application can be particularly useful for quality improvement programs. For instance, health centers conducting quality improvement efforts can rely on problem lists to identify their disease-specific patient populations, provide follow-up care, and ensure all patients are receiving care that meets best practices in treatment.
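
Because coded problems are structured data, identifying a disease-specific population reduces to a simple query. A minimal sketch in Python, using hypothetical patient records and the ICD-9-CM asthma category prefix 493 (all names, IDs, and records here are illustrative, not drawn from any real system):

```python
# Hypothetical sample data: each patient carries a coded problem list.
patients = [
    {"id": 1, "name": "Sally", "problem_codes": ["493.90", "477.9"]},
    {"id": 2, "name": "Tom", "problem_codes": ["401.9"]},
    {"id": 3, "name": "Ana", "problem_codes": ["493.90"]},
]

def patients_with_problem(records, code_prefix):
    """Return patients whose coded problem list contains a code
    starting with the given ICD-9-CM prefix (e.g. "493" = asthma)."""
    return [p for p in records
            if any(c.startswith(code_prefix) for c in p["problem_codes"])]

asthmatics = patients_with_problem(patients, "493")
print([p["name"] for p in asthmatics])  # ['Sally', 'Ana']
```

A free-text problem list offers no equivalent: “asthma,” “Asthma (exercise-induced),” and a practitioner’s personal abbreviation would all evade the same query.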

The problem list also can be the basis for determining standard measures or report cards in healthcare for both individual practitioners and healthcare institutions. Practitioners and healthcare organizations are often judged by treatment statistics that involve a certain percentage of patients receiving recommended tests and treatments. The problem list can provide the denominator for these statistics. Finally, the problem list can be used to identify patients for potential research studies.

Unfortunately, the exclusion of a diagnosis from the problem list comes at the expense of the patient. If Dr. Smith forgets to add asthma to Sally’s problem list, the nurse practitioner may not identify Sally as a higher risk patient when she comes in with a cough or fever. The quality improvement effort at the healthcare center then passes over Sally, and she never is reminded to come in for an annual check-up with her pulmonologist. When evaluating the center’s quality of care, Sally’s inadequate treatment is not included, leading to a missed opportunity for the organization to identify an area in need of improvement. Sally also misses out on a new research study that offered free medications because she was never identified as a potential candidate. Although a tiny part of the landscape in the medical record, the problem list can play a significant role in patient care.

What Should Be on the Problem List?

If asked to define the problem list, practitioners would likely give similar, but not identical, responses. For instance, practitioners at a Boston-area health center said:

  • “The problem list is for nontransient illnesses.”
  • “A problem is anything ongoing or active that I’m working on with the patient.”
  • “The problem list is a place to have a summary of the most important things about a patient.”

While these definitions show a common ground, each contains its own scope of what problems should be included or excluded.

The first quote points to a more conservative version of the problem list that encompasses only past and existing diagnoses. This is the official definition used in the federal meaningful use program. As such, the conservative problem list likely will become the most prominent version nationally.

In comparison, the second and third statements indicate a much broader view of problem lists that includes expanded categories such as undiagnosed symptoms, hospitalizations, surgeries, and social and family histories.

Both the conservative and expanded versions have their respective pros and cons. The argument against the expanded problem list is its length, which makes it difficult to find the most important facts quickly. On the other hand, the expanded problem list allows practitioners greater leeway to include personalized content for each patient. For example, if a practitioner sees a patient with a significant fear of doctors, that practitioner may choose to place “afraid of doctors” on that patient’s problem list to ensure that if the patient is seen by another practitioner at that healthcare organization, the clinician will be alerted to the issue and act with extreme sensitivity. While “afraid of doctors” is not an ICD-9-CM–coded problem, in this scenario it was the most important fact about that patient for providing high-quality care.

Currently the scope of problem lists is largely determined by the structure of a healthcare center’s EHR and the judgment of its practitioners. With trade-offs in patient care for both small and large scopes of the problem list, this is one area where practitioners will strongly disagree.

What Are Worthy Problems?

Beyond the broad categorical determinants, another major point of debate concerns which diagnosed illnesses are worthy of the problem list. Currently the decision of which problems are included or excluded remains largely the determination of practitioners. While one practitioner may argue that chicken pox is a relevant problem for assessing risk for shingles and the need for a chicken pox vaccination, another may counter that its inclusion adds little value and clutters the list.

The inclusion of an illness on the problem list likely will vary by patient as well. Exercise-induced asthma will be important information about a patient on several asthma medications, but it may not be important if the patient is not seeking treatment, takes no related prescriptions, and is not affected by the illness in his or her daily life. Long-term undiagnosed symptoms also fall within this difficult category. A patient may complain of a cough for years but have no clear diagnosis. Under a conservative problem list structure, the physician would not add “persistent cough” because it is not a nontransient illness. Yet, if that patient is admitted to the emergency room, such information could be a key clue for determining treatment.

Due to the complexity of deciding which health concerns should and should not be included, most healthcare organizations have left these decisions to their practitioners. As a result, in a shared record system practitioners often run across many different styles of problem lists, some of which differ greatly from Lawrence Weed’s original vision. For example, misuses of the problem list include documenting patient treatments or tests, such as the date of the patient’s last abnormal Pap smear.

While ideally every possible clue to a patient’s health could be noted on a problem list, comprehensibility quickly becomes an issue, particularly as a patient gets sicker. For relatively healthy patients, problem lists limited to nontransient illnesses typically contain fewer than five items. For unhealthy patients with an expanded version of the problem list, the document can grow to 30 or more lines of text, making a clear and quick understanding of the patient’s health nearly impossible. Completeness versus length is currently decided by the personal preferences of practitioners and will be one of the hardest compromises to find in any standardized problem list.

Managing Sensitive Information

Another debate surrounding problem list content is inclusion of information on highly sensitive issues that may not be need-to-know for every healthcare professional. Healthcare organizations that include a behavioral health division, for example, must determine how much behavioral health information should be shared across the entire organization. Some organizations will restrict the psychiatrist’s notes to the behavioral health department but still list all prescription drugs and patient visits in the common pool of information within the EHR. This method typically gives enough information to a primary care physician or emergency room practitioner to indicate that underlying behavioral health issues exist, but without going into specifics.

At organizations without an official policy, the decision is left to the personal judgment of the practitioner. As the problem list is rarely filtered, the information can be viewed by most departments within the organization that have EHR access. While a specific diagnosis may be helpful in an urgent care situation, it may not be need-to-know information for the patient’s allergist. Organizations must carefully consider state and federal patient privacy requirements. Failure to incorporate patient privacy rules into the design of the problem list may cause inadvertent privacy breaches. Therefore, healthcare organizations must clearly define what problems should be included or excluded on a problem list in order to maintain appropriate confidentiality of patient data.

Specificity of Diagnosis

While coding may give the problem list a common dictionary, EHRs are often designed to allow organizations to map a code to their own terminologies. Therefore, organizations have a choice in the level of precision they want to use for the terms on their problem lists. This decision will ultimately affect the efficacy of the problem list for relaying significant amounts of information quickly.

For example, a patient diagnosed with type II diabetes could be given the ICD-9-CM diagnosis code 250.00 in the patient encounter. If this diagnosis code is then promoted to the problem list, the hospital could program the EHR to list code 250.00 as “DM,” “Diabetes,” “Diabetes Type II,” or “Diabetes Mellitus Type II” on the actual problem list. While using the more detailed description of the disease is most precise and relays the greatest amount of information, the full description of the disease can also clutter the list and may not actually be any more useful to the practitioner than the acronym.
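
The separation of a stored code from its displayed term can be sketched as a simple lookup table. This is an illustration of the configuration choice described above, not any vendor’s actual EHR interface; the verbosity labels are hypothetical:

```python
# Hypothetical display-term mapping for ICD-9-CM code 250.00
# (diabetes mellitus, type II). The verbosity level chosen is an
# organizational configuration decision, not part of the code set.
DISPLAY_TERMS = {
    "250.00": {
        "acronym": "DM",
        "short": "Diabetes",
        "medium": "Diabetes Type II",
        "full": "Diabetes Mellitus Type II",
    },
}

def display_problem(code, verbosity="full"):
    """Render a stored problem code at the organization's chosen precision."""
    return DISPLAY_TERMS[code][verbosity]

print(display_problem("250.00", "acronym"))  # DM
print(display_problem("250.00"))             # Diabetes Mellitus Type II
```

Because the underlying code is unchanged, an organization can revisit its display choice later without recoding a single record.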

This debate over precision is further exacerbated by the variety of needs of different practitioners. The descriptions and terms a specialist may prefer are not necessarily easily understood by the rest of the medical community. Further, as medical records become accessible to patients through online portals, healthcare organizations will need to consider how to make problem lists comprehensible to patients while maintaining their usefulness to practitioners.

Finally, the use of ICD-9-CM billing codes as the backbone to problem list coding comes with its own set of issues that could potentially dilute the accuracy of problem lists. Often problems are promoted to the problem list via billing diagnosis codes selected during a patient visit for insurance purposes. These diagnosis codes often do not reflect clinical information as it is most relevant to providers. Common mistakes include undercoding, where practitioners select a diagnosis that is less precise than actually assessed; overcoding, where practitioners select a diagnosis that is more precise than assessed; and coding the symptom instead of the disease.

These types of mistakes can lead to a cluttered problem list. Just as practitioners differ in the specificity of codes they choose, they also have the option to promote different codes to the problem list that reference the same disease. Under such a situation, the problem list becomes redundant and consequently less useful to practitioners.
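
One rough way to surface this kind of redundancy is to exploit the hierarchy of ICD-9-CM, where the three digits before the decimal point identify the disease category. The sketch below is a simplified heuristic on hypothetical data, not a clinically validated deduplication method:

```python
from collections import defaultdict

# Hypothetical problem list in which two entries reference the same
# underlying disease (both fall in ICD-9-CM category 250, diabetes).
problem_list = ["250.00", "250.02", "401.9"]

def find_redundant(codes):
    """Group ICD-9-CM codes by their three-digit category and return
    any category listed more than once -- a rough redundancy check."""
    groups = defaultdict(list)
    for code in codes:
        groups[code.split(".")[0]].append(code)
    return {cat: lst for cat, lst in groups.items() if len(lst) > 1}

print(find_redundant(problem_list))  # {'250': ['250.00', '250.02']}
```

A flagged category still needs practitioner review, since two codes in one category can occasionally represent genuinely distinct problems.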

The continuation of incorrect coding practices ultimately undermines the accuracy of the problem list. This is a significant complication to consider in addressing the precision of language in the problem list.

Case in Point: Recommendations for Achieving a Coded Problem List

While moving to a standardized problem list needs to be considered, first healthcare organizations will need to meet meaningful use criteria around problem lists. Participants in the program are required to maintain an up-to-date problem list of current and active diagnoses based on ICD-9-CM or SNOMED CT codes. To comply, at least 80 percent of all unique patients seen by eligible providers must have at least one entry (or an indication of none) recorded as structured data.

While many providers have adopted EHRs that feature coded problem lists, those who rely on free text face a relatively large challenge in shifting their practitioners’ behavior to a structured format. In the summer of 2010 a Boston-area health center that primarily engages in outpatient care conducted a study to determine how to best increase its use of coded problems. Following are the main recommendations the organization identified:

  • Create a “healthy” code: The health center sees a significant number of patients with no listings in their problem lists. It is difficult to tell whether “no problems” means a patient is healthy or that the problem list is incomplete. Creation of a healthy code will help clarify the record and boost coding of problem lists over time.
  • Make the selection of coded problems more robust: Analyze the most common free-text terms and make sure they are available as codes. The health center found GERD, osteopenia, and osteoporosis were missing from the dropdown menu of coded terms for the problem list. Further, the center deleted unused terms from the menu to keep the length of the menu manageable.
  • Automatically translate uncoded problems into their equivalent coded problems: Another option is to translate free-text terms such as GERD into their coded counterpart. Concerns exist over proper translations, particularly of practitioners’ personal abbreviations.
  • Conduct problem list training focused on coding: During interviews with practitioners, it was found that some did not know that a “Promote to Problem List” button existed on the diagnosis page within the encounter note. Training even on simple functionalities, such as adding problems to or deleting them from the problem list, could make a big impact on the completeness of both the problem list and the use of coding.
  • Decide on standards for content and utilization: Clinical leadership must decide on the standards for how the problem list should be used. This includes the involvement of the medical records committee to determine questions such as who is responsible for the problem list and what it should include. Through standardization, the impediments to coding can be further addressed, and the use of problem lists can be promoted.
  • Create a warning when free text is entered: If a practitioner does use free text in the problem list, an alert reminding the practitioner to use a proper code or, even better, a warning that detects the coded equivalent to the problem, would help eliminate unnecessary free text.
  • Integrate SNOMED coding language for use in problem lists: Currently, the health center uses ICD-9-CM in its diagnosis search. Practitioners noted that many of the terms are not intuitive and thus are difficult to find. Some systems are capable of searching through both ICD and SNOMED dictionaries and then, based on the selected diagnosis, map to the organization’s preferred coding terminology. Dual dictionary use could help increase coding the problem list, as problems are often generated from the diagnosis during the patient encounter. Such a system would ease practitioners’ search for the right descriptor for the problem.
  • Integrate search option by vernacular language: Adding search boxes that map vernacular terms to their appropriately coded problem counterpart would ease the use of coded terms in the problem list.
  • Move the option for free text to a “less convenient” location: At the health center, the option for free text on the problem list is front and center. Moving the button to a less convenient location within the problem list page will help practitioners consider using the coded terms before taking the easier free-text route.
  • Allow every diagnosis to be “promotable” to the problem list: Currently at the health center, only a selection of problems are promotable to the problem list from the diagnosis page. Without organizationally defined limits on what should or should not be on the problem list, practitioners should have the option to promote (and consequently code) any diagnosis to the problem list.
  • Create feedback mechanisms: Create a system that lets practitioners track their use of coding in the problem list against coding goals. This mechanism can be layered on top of feedback about overall utilization and accuracy of problem list content.
  • Gain support of clinical leadership: No massive changes will occur among practitioners without the support of the clinical leadership. Gaining support will be key in promoting greater usage and coding of problem lists. Following on the analysis of the current problem list usage data, the health center’s next steps were to present the findings to the medical records committee and gain support for implementing some recommendations or requirements for practitioners to code problems.
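
Two of these recommendations, the vernacular search and the free-text warning, can be combined into one entry workflow. The sketch below uses a hypothetical synonym table; a real system would draw on a curated dictionary mapped to ICD-9-CM or SNOMED CT:

```python
# Hypothetical mapping of lay terms to coded problems. The table and
# function names are illustrative, not part of any actual EHR product.
VERNACULAR_TO_CODE = {
    "heartburn": ("530.81", "GERD"),
    "high blood pressure": ("401.9", "Hypertension"),
}

def add_problem(entry):
    """Map a vernacular entry to its coded problem; otherwise warn
    that the entry would be stored as unstructured free text."""
    match = VERNACULAR_TO_CODE.get(entry.lower())
    if match:
        return {"code": match[0], "term": match[1]}
    return {"warning": f"No coded equivalent found for '{entry}'; "
                       "consider selecting a coded term."}

print(add_problem("Heartburn"))  # {'code': '530.81', 'term': 'GERD'}
```

In practice the warning path would open the coded-term search rather than silently accept free text, nudging practitioners toward structured entry without forbidding it outright.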

No Clear Solution

Many areas of disagreement will arise in any conference to create standards for the problem list. There is no perfect answer for the question, “What is a problem list?” What could save a life in the emergency room might embarrass a patient in the primary care provider’s office. While the problem list appears to be a simple administrative document, it is full of complexities, and the diversity in opinions over content is why healthcare organizations are so tentative about addressing the problem list in anything but broad strokes.

Part 2: Fixing the Problem List

The problem list, that cheat sheet to patient health, originated with Lawrence Weed’s problem-oriented medical record in the late 1960s and has since become a standard component of patient medical records. Part 1 of this article (February 2011) reviewed the benefits of problem lists as well as areas of controversy over what should and should not be included on them.

The controversies described in part 1, such as whether to include sensitive problems or undiagnosed long-term symptoms, are certainly nothing new to medicine. In general, the variation in the problem list across practitioners is a readily accepted part of practicing medicine. However, the emergence of electronic health record systems, accelerated by federal incentives that encourage their adoption, makes this the ideal time to forge consensus on problem list content.

Leveraging an accurate, up-to-date, and consistent problem list within the EHR will allow organizations to identify their diseased patient populations with increased accuracy, which has major implications for the success of decision support and population management tools.

Yet, achieving consistency cannot occur via policy alone. With the benefits of computers, new opportunities exist in applications and online portals to help the healthcare community create consistent and accurate problem lists across their entire patient base.

Why Standardize?

The acceptance of variations in the problem list is a legacy of the paper medical record system. Healthcare organizations that use paper records have almost no incentive to standardize the problem list because doing so would require reviewing and modifying records by hand, a vast amount of work for little perceived benefit.

However, EHRs present new means to solve these long-running issues in a practical, cost-effective manner. Up to now the problem list has been largely overlooked in the medical record’s digital transition; it is used no differently in digital form than it was on paper.

The federal EHR incentive effort recognizes the importance of problem lists to patient care. Professionals and hospitals participating in the meaningful use program must code problem lists as part of meeting stage 1 requirements. Specifically, they must maintain up-to-date problem lists of current and active diagnoses based on ICD-9-CM or SNOMED CT, clinical coding standards designed to classify diseases, symptoms, and other relevant factors about a patient. At least 80 percent of all unique patients must have at least one entry or an indication of none recorded as structured data.

While pushing problem lists to a common dictionary will be a useful step toward developing uniform problem lists, the variation in content and utilization is still a hindrance to fully realizing their potential in the digital health age. At a foundational level, practitioners will need uniform problem lists to provide consistent care across patients. Further, the market driver behind any standardization initiative will be the need for problem lists to provide clinical decision support and population management tools with a precise method for identifying diseased patient populations.

The Inadequacies of Proxy Methods

Currently healthcare organizations are using proxy methods such as medication lists and billing codes to identify their diseased patient populations. Both of these methods, however, come with high rates of false positives and false negatives. For instance, the use of medication lists to identify target populations is highly inaccurate because a diseased patient may not be taking medications or may be receiving treatment elsewhere. The medications thus may not be listed in the EHR.

A nondiseased patient also may be assigned a particular medication for a very different problem. For instance, if a healthcare organization attempts to identify all their asthmatic patients by looking at who was prescribed an inhaler, they are likely including a group of patients who had persistent coughs, not asthma.

Another common proxy method is the use of diagnoses from billing codes. This method also contains great uncertainty, as billing codes often do not precisely reflect clinical information as it is most relevant to providers. In addition, a single misdiagnosis on one visit could throw off all further reporting. This situation can be a particular problem if providers miscode a diagnosis they are considering that then gets ruled out by later diagnostic studies. A query for a diseased patient population based on that initial billing code would then treat this patient as diseased.

Dave deBronkart, or “E-Patient Dave,” a blogger on the participatory medicine movement, exemplified the reality of trying to use billing data as clinical information when he attempted to move his health files from a hospital onto Google Health in early 2009. The result was a smattering of erroneous information because the hospital was transferring billing data, not clinical data. With these issues, using billing data to identify a diseased population of patients will come with a high rate of error.

If EHRs continue to rely upon these proxy methods, they will hinder the acceptance of decision support and population management tools. For example, if a healthcare center installs a simple decision support tool to remind its practitioners to give asthmatic patients a flu shot, using one of these proxy methods will miss a certain segment of the at-risk population. Further, that decision support tool will create pop-ups on irrelevant cases, annoying the doctors and making them less likely to pay attention to future reminders.

For decision support tools to be most effective, they must be extremely accurate—providing the right advice in the right scenario at the right time. As problems represent a true declaration of a patient’s health, the problem list presents the best opportunity within a patient record from which to gain the most precise information for decision support and population management tools. Yet, the problem list cannot be helpful until the current variation in content and use is addressed.

Addressing the Inaccuracies

Even if a healthcare organization creates policies around the content of problem lists, achieving uniformity ultimately will require changing provider behavior. It would be difficult for practitioners to comply completely with any standardizing policies, because they are not easy or intuitive requests.

In general, practitioners find that entering standardized data rather than free text typically takes more effort (e.g., more clicks) and often does not express the data in a way that best matches their thought processes. With the problem list, policies to standardize will likely meet the same issues. Practitioners have developed their own problem list styles, and any policy inherently cannot meet the preference of all practitioners at all times. Therefore, even with good intentions, practitioners’ personal preferences would quickly win over organizational standards in day-to-day practice.

Further, natural human error keeps problem lists from achieving full accuracy. The most common error today is simply forgetting to add conditions as they are diagnosed. Other less frequent errors include using incorrect terms to describe a problem or placing a condition on a patient’s problem list that never occurred.

Fortunately, the EHR provides new solutions for these very old problems. To achieve uniformity, the healthcare industry must create systems and tools that encourage consistency and completeness in the problem list as well as policies to address disagreements in utilization.

Clarifying Responsibilities

In a shared medical record system, the issue of who is responsible for maintaining problem lists can be contentious. Many primary care providers (PCPs) believe that both specialists and PCPs should add problems to the list. Conversely, many specialists have suggested that the problem list is solely the PCP’s responsibility and feel it would be intrusive to add their own problems.

In this controversy, AHIMA recommends that accountability for maintaining accurate problem lists be assigned to the PCP. However, if a medical record is shared, mechanisms allowing specialists to provide recommendations for problem list additions would be preferred. While this is happening through informal communication between PCPs and specialists, the process is not a medical standard, and the multiple steps required to actually place a problem on the problem list perpetuate inaccuracies.

To reduce the potential for error, organizations should implement policies clearly delineating the responsibilities of both PCPs and specialists. They also need to create methods through which clear communication can occur. In the case of recommendations from a specialist to a PCP, an EHR application that supports such a process (e.g., a prompt within the specialist’s encounter note to supply a suggestion to the PCP) would streamline this process, ease the responsibility question, and increase accuracy.

Promoting All Problems

Even practitioners who pay excellent attention to the problem list are prone to mistakes, such as forgetting to add a problem. The persistence of human error is another area where the digital problem list can surpass its paper counterpart. Decision support tools that increase the completeness of problem lists can help avoid simple mistakes.

One such tool under development at Brigham and Women’s Hospital is the Maintaining Accurate Problem Lists Electronically project. MAPLE is an EHR application that alerts physicians to potential problem list gaps during the documentation process based on the diagnoses, vitals, medications, and tests entered in the encounter note. MAPLE is currently being evaluated in a nonblinded, cluster-randomized clinical trial, according to principal investigator Adam Wright.

Stéphane Meystre and Peter Haug at the University of Utah also worked to address the inconsistencies in problem lists by studying the use of natural language processing (NLP) to draw out potential medical problems from free-text medical documents within an EHR. Their study, published in 2006, reported achieving high, but not perfect, rates of recall and precision for identifying a set of 80 medical problems. Further development of tools like MAPLE and NLP likely will be the key to reducing human errors in problem lists.
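
To make the NLP idea concrete, a toy keyword-spotting pass over an encounter note is sketched below. This is a deliberate simplification for illustration only; Meystre and Haug’s system was far more sophisticated, and the term-to-code table here is hypothetical:

```python
import re

# Hypothetical lookup of problem terms to ICD-9-CM codes; a real NLP
# system would handle negation, abbreviations, and synonyms.
PROBLEM_TERMS = {
    "asthma": "493.90",
    "hypertension": "401.9",
    "persistent cough": "786.2",
}

def extract_candidate_problems(note_text):
    """Scan free-text documentation for known problem terms and return
    candidate coded problems for practitioner review."""
    text = note_text.lower()
    return sorted({(term, code) for term, code in PROBLEM_TERMS.items()
                   if re.search(r"\b" + re.escape(term) + r"\b", text)})

note = "Patient reports a persistent cough; history of asthma."
print(extract_candidate_problems(note))
# [('asthma', '493.90'), ('persistent cough', '786.2')]
```

Crucially, such a tool should only suggest candidates; the practitioner decides whether each belongs on the problem list, which is why recall and precision short of perfection can still be useful.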

Of course, not all providers will welcome computer involvement in clinical documentation. As with the debate over documentation templates for patient encounters, some practitioners argue strongly against the computer guiding the practitioner in the decision-making process. This is an important debate that requires more testing and experience to properly weigh the costs and benefits.

A more immediate and addressable concern when considering these tools is that such applications can be the impetus for tremendous clinical documentation errors.

On paper, documentation errors remain isolated to that particular patient encounter. With EHRs a glitch in a program, misinterpretation of information, or disregard of instructions can lead to rampant error in medical documentation that if continued unchecked could pose a risk to patient care. Thus any EHR system that suggests diagnoses to providers for the problem list must be monitored for accuracy.

Patient Review

The next defense against inaccuracies in the problem list is regular review. PCPs typically review problem lists during physicals. Yet the large proportion of the population without a dedicated PCP, combined with the fact that many people do not receive annual physicals, makes this review process unreliable for creating up-to-date problem lists across the entire patient population. Patient review of problem lists can help increase accuracy. But allowing patients to review their own problem lists is controversial among providers.

In particular, some providers are concerned that patients may not understand the medical jargon and react badly to diagnoses they perceive as insulting, such as obesity or alcohol abuse. This situation could strain the patient-doctor relationship.

Yet, while these concerns are valid, the emergence of online patient portals significantly eases the process of a patient reviewing a problem list for errors. For instance, portals allow patients to review their information in their home, not the doctor’s office; patients will have more time to look up disease definitions and other information.

Further, portals can be designed for the patient. Portals can be programmed to show definitions when the patient scrolls over or clicks on a problem, or they can include language translation tools to aid non-native speakers.

Problem lists have been accessible to patients online at medical centers such as Beth Israel Deaconess Medical Center for some time without serious issues in patient-practitioner relations. While some practitioners remain concerned that a shareable problem list will lead to controversy that adds to their already stretched appointment times, that same controversy can serve as the impetus for productive conversations between practitioners and patients about the patient’s condition.

Right now, Tom Delbanco, MD, at Beth Israel Deaconess Medical Center is conducting OpenNotes, the largest study to date of the effects of patients viewing their full medical records via online portals. Its results will be highly informative about the adoption of patient-facing portals and, consequently, the online review of the problem list.

A Perfect Tool for the EHR

The healthcare industry is undergoing a digital revolution. The result will likely be vast changes in how people interact with medical records.

The continuity of care movement, for example, is pushing practitioners to use the EHR as the main medium for communication with other providers. Administrators are relying on medical records to measure the success of quality improvement projects in real time. Patients are now getting a chance to view their medical records online, gaining a new understanding of their diagnoses. Finally, health information exchange may give patients a complete medical record that spans all of their practitioners.

With all these changes, the medical record is under pressure to serve the increasing demands of numerous stakeholders. A record that was formerly a PCP's personal notes is now of interest to specialists, administrators, researchers, government officials, payers, patients, and the hospital next door. Further, every stakeholder brings to the table a new set of demands for information. As a consequence, tolerance for inaccurate, inconsistent, or ambiguous parts of the medical record is rapidly decreasing, and the need for standardization across the medical record is growing urgent. The problem list exemplifies this trend.

Lawrence Weed was truly a visionary who created a perfect tool for the EHR. Yet, the variations in today’s problem list make it unusable as a resource to further improve patient care. In order to reap the benefits of upcoming decision support and population management tools, as well as meet the larger trends in medicine, healthcare centers need to address the issues in content and utilization as well as develop the policies and tools to standardize the problem list.

Casey Holmes is a master’s student at the Harvard School of Public Health. Part 1 of this article appeared in the February 2011 print issue. Part 2 appeared in the March print issue.


  1. Can I code from a problem list, and what is a non-codable list?

  2. Referring to the Bates Guide or Hutchison’s Clinical Methods, which have been the gold standard of history taking and diagnosis for physicians in training, the problem list is clearly defined as the working list of problems to be addressed ‘now’ and/or needing future attention or observation.
    If we are using ICD codes to define the problem list, I think we should use a different naming convention, like ‘diagnosis list’ instead of ‘problem list,’ which will eliminate the use of unapproved diagnoses.
    Thank you for being the voice of the profession in such ambiguous times.

  3. Good article that brings together a lot of issues around the use of the problem list. What has not been mentioned is that with computers and the right metadata the problem list can be made to look many different ways.

  4. Ms. Holmes astutely points out the problems of a missing diagnosis from the problem list, but there also exists problems with false (presumptive or “rule-out”) diagnoses that have been entered for the purpose of ordering tests. To improve overall accuracy of a problem list, sophisticated analytics are available today that can corroborate diagnoses against active medications and active lab values to identify an accurate and prioritized list of diagnoses. This level of analytics is already built into some EHRs. Additionally, the right analytics engine can also solve the problem of free text translation into ICD codes.

    While the need for standardization is a real and somewhat lofty expectation placed on EHRs as they implement problem lists, what may occur is that problem lists become standardized to the lowest common data entry and analytics methods available, losing out on the benefit of highly accurate lists for providers who seek a problem list that conveys more actionable health intelligence.

    Ahmed Ghouri, M.D.
    Chief Medical Officer, Anvita Health
