Clinical Documentation Integrity Key Performance Indicators Practice Brief
Introduction
This practice brief focuses on the key performance indicators (KPIs) that can be evaluated to measure the performance of clinical documentation integrity (CDI). KPIs are measurements specific to the organization or department being evaluated and are critical for tracking progress toward an intended result. They provide a focus for strategic and operational improvement, create an analytical basis for decision-making, and help maintain attention on the most impactful information.
Managing the use of KPIs includes setting targets (the desired level of performance) and tracking progress against that target. Managing with KPIs often means working to improve leading indicators that will reap benefits. An organization uses KPIs based on predominant trends to set priorities regarding how CDI resources are deployed so the organization is better able to accomplish enterprise-wide strategic goals.
Leading and Lagging Indicators
Leading indicators are precursors of future success, while lagging indicators show how successful the organization was at achieving results in the past (after the fact). The difference between them is that a leading KPI indicates what a CDI department is working toward, whereas a lagging KPI measures only what has already been achieved.
Depending on the scope of the CDI efforts, leading indicators might include:
- Year-over-year variance analysis related to pay-for-performance penalties by measure and/or by service line (e.g., inpatient, pediatric, clinic, etc.). This information can help the CDI department identify new documentation opportunities that impact pay for performance.
- Telehealth utilization and virtual visits within an organization by payer, service, specialty, or practice size/location. CDI departments may begin to review telehealth documentation when this type of service is provided by the organization.
- Ratio of population (patient) risk adjustment factor (RAF) score to expenditures by service area by payer. CDI departments may want to begin reviewing for documentation opportunities that impact the RAF score. For example, consider looking at RAF score changes from year to year following implementation of, or changes to, a CDI program.
Lagging indicators are concrete numbers, which makes them very useful, and the methods used to calculate them are widely understood. These may include metrics such as the case mix index (CMI), review rate, query rate, response rate, and agreement rate. They allow an organization to benchmark itself against others or make year-over-year internal comparisons. Combining leading and lagging indicators can help provide a comprehensive and accurate measurement of success.
External and Internal KPIs
CDI departments may want to monitor both internal and external KPIs to get a comprehensive overview of their performance. External KPIs monitor information that originates outside the organization, while internal KPIs are metrics generated within the CDI department. CDI leaders will need to analyze both sources of information to get a clear picture of the department’s outcomes.
External KPIs
There are many external resources that can be leveraged to measure the success of a CDI department. One of the primary tools for revenue cycle professionals is MedPAR data, a collection of de-identified Medicare claims data. Specifically, MedPAR consolidates inpatient hospital claims data from the National Claims History files into admission-level records, so it covers 100 percent of Medicare beneficiaries using hospital inpatient services. MedPAR data is used as a basis of comparison and benchmarking among hospitals. A common tactic among consulting companies is to compare a hospital’s performance against the 80th percentile threshold as a target to measure potential opportunity; however, due to differences in patient populations, not all hospitals can achieve the 80th percentile, so it may be an unrealistic goal.
The Program for Evaluating Payment Patterns Electronic Report (PEPPER) provides hospital-specific Medicare claims data for target areas identified by Recovery Audit Contractors (RACs) and Medicare Administrative Contractors (MACs) that are prone to improper payment. “PEPPER does not identify payment errors; rather, it should be used as a guide for auditing and monitoring coding and billing to help providers identify and prevent payment errors.”2 Each hospital’s ratio is compared to other hospitals at the state, MAC jurisdiction, and national levels, resulting in a ranking by volume percentage.
PEPPER uses a high outlier threshold at the 80th percentile and a low outlier threshold at the 20th percentile. If the percentage of paid Medicare claims for a specific target area ranks at the 80th percentile or above, the organization is considered a high outlier for that target area.3 For example, if the percentages for a particular target area range from 20 percent to 75 percent across hospitals, the 80th percentile might fall at 68 percent, so every hospital with a target area ratio of 68 percent or higher would be flagged as a high outlier.
Best practice is to investigate why an organization is an outlier by sampling claims and reviewing documentation to validate the hospital’s rating for that particular target area.3 Determine if it makes sense for the hospital to be among the top 20 percent of all hospitals for that particular target area.
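To make the threshold concrete, the following minimal Python sketch (with invented peer percentages and a simplified nearest-rank percentile estimate, not PEPPER’s actual statistical methodology) illustrates a high-outlier check:

```python
# Hypothetical illustration of an 80th-percentile high-outlier check.
# Peer target-area percentages are invented for this example.

def is_high_outlier(hospital_pct: float, peer_pcts: list[float]) -> bool:
    """True if the hospital's target-area percentage is at or above
    the 80th percentile of its peers (simple nearest-rank estimate)."""
    ranked = sorted(peer_pcts)
    idx = min(int(0.8 * len(ranked)), len(ranked) - 1)
    return hospital_pct >= ranked[idx]

peers = [20, 25, 31, 40, 44, 52, 55, 60, 68, 75]  # ranges from 20% to 75%
print(is_high_outlier(70, peers))  # True: 70 >= 68, the 80th percentile here
print(is_high_outlier(50, peers))  # False
```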
Internal Metrics
Internal metrics will vary depending on the CDI department’s mission and focus area(s). As the department matures and the focus expands, metrics should be evaluated and updated to include emerging standards (leading indicators) that support the new areas of opportunity. With this maturity, there can be an evolution toward outcomes, in addition to process metrics, creating a balance between quantity and quality indicators.
When identifying which metrics to use, consider evaluating them from an organizational leadership perspective to identify areas of education, process improvement opportunities, and measure the department performance and outcomes. Education opportunities may include CDI, health information management (HIM)/coding, quality, clinical, revenue cycle, and compliance departments, as well as medical staff and senior leadership. The delivery of education should be offered based on the needs of the audience (e.g., on-site, remote, hybrid, written content, etc.).
Internal metrics are designed to analyze the CDI department’s outcomes and performance improvement opportunities. Further assessment of daily performance measures may help identify any potential outliers. Productivity metrics may vary by organizations, departments, and focus areas within the CDI department. Some factors that may impact these metrics may include, but are not limited to, the following:
- Reporting structure of the CDI department (e.g., HIM, utilization management (UM), finance, quality, compliance, etc.)
- CDI mission (e.g., quality, diagnosis-related group (DRG) optimization, clinical validation, etc.)
- Different levels of experience in CDI and backgrounds
- Varying types of cases reviewed (e.g., complexity of cases, specialties, focus areas, etc.)
- Technology use and/or lack of technology
CDI departments should include KPI definitions in a policy and procedure to provide clarity to the data obtained. Streamlined definitions and policies increase the reliability of CDI data. Some common internal metrics may include, but are not limited to, the following:
Review Rate
Definition: The calculation of the number of health records reviewed divided by the number assigned to a CDI professional and/or department per the designated time period. The metric may be used to support additional staff needed to meet the desired percentage of review. This rate may be assessed daily, weekly, monthly, quarterly, and/or annually depending on the needs of the stakeholders. This metric is oftentimes used to measure CDI productivity, but it is important for stakeholders to consider all the factors that may impact the review rate. These factors can include time spent delivering provider education, department meetings, competing responsibilities, as well as the level of patient acuity and complexity.
Best Practice: Organizations may utilize an industry standard benchmark; however, this approach may not be realistic to the needs of the individual department. As departments mature and responsibilities increase, determining a global industry standard is becoming more challenging. It is best practice to determine the benchmark review rate based on review objectives (e.g., quality measures, severity of illness (SOI), risk of mortality (ROM), observed versus expected (O/E) ratio, social determinants of health (SDOH), etc.), and daily responsibilities (e.g., provider education, rounding, department meetings, reconciliation of cases, etc.). These benchmarks should be measured yearly and adjusted based on new review objectives.
Benchmark: Some methods that could be used to calculate a realistic benchmark for the review rate include but are not limited to:
- Pilot study to determine average time to review, per specific location.
- Calculation of the average amount of time CDI professionals spend in a health record. This should include review time, query development, etc. This would not include time spent in meetings, providing education, etc.
- Follow-up assessment of review rate goals to determine if they are being met. If they are consistently not met, the benchmark goal may not be realistic.
Equation: Number of records reviewed/number of records assigned to be reviewed.
Example:
- A CDI professional reviewed 12 of the 15 health records assigned to them to review. The review rate for this professional is 80 percent.
- CDI team B reviewed 95 of the 100 health records assigned to be reviewed this month. The review rate for the CDI team this month is 95 percent.
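As a minimal sketch (the function name and structure are illustrative, not a prescribed implementation), the review rate can be computed directly from the equation above:

```python
# Review rate = records reviewed / records assigned, as a percentage.

def review_rate(records_reviewed: int, records_assigned: int) -> float:
    if records_assigned == 0:
        raise ValueError("No records were assigned for the period.")
    return 100 * records_reviewed / records_assigned

print(f"{review_rate(12, 15):.0f}%")   # CDI professional: 80%
print(f"{review_rate(95, 100):.0f}%")  # CDI team B, this month: 95%
```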
Response Rate
Definition: The number of provider responses to queries within a specified time frame divided by the number of queries sent, expressed as a percentage. This can be calculated per CDI professional, per individual provider, or for the overall department (e.g., CDI, medical, specialty area, etc.).
Best Practice:
- Define what constitutes a response, which may or may not include responses like “Unable to Determine,” “Not my patient,” “Decline,” etc. The definitions may vary by organization; the goal is to promote consistency across the organization.
- Define the time frame that the provider is expected to respond to the query (e.g., 72 hours after issuing/launching query, 48 hours after discharge, etc.). This should align with the organization’s query escalation policies.
- Goals should align with provider contracts, bylaws, and other responsibilities.
- Define which provider is responsible for responding to the query (e.g., resident, attending, consultants, etc.). For example, if a patient is admitted to service 1 and transferred to service 2, who is responsible for an admission diagnosis query after the patient is transferred to service 2?
- Develop ongoing education regarding response expectations. For example, determine whether follow-up is needed on the way the query is posed or whether the physician is rejecting the query.
- Fully understand the advancement in CDI technology, which can provide a more efficient process for providers to respond to queries.
- Trend improvement in physician responses and decreases in queries per provider as documentation improves.
Benchmark:
- Work with medical staff leadership to determine a realistic response rate and time frame expectations.
- Establish ongoing assessment of the response expectations to validate that they are realistic given the providers’ workflow.
- Trend response rates by providers to determine level of compliance and areas of opportunities to improve and enhance efforts. This information can help determine if the expectations are realistic and can be used for educational purposes.
Equation: Number of queries with a response/number of queries sent.
Example:
- Provider group A responds to 30 queries (according to the organization’s definition of a response) and had a total of 35 queries sent. Provider group A has a response rate of 86 percent.
- Provider B responds to 20 of the 21 queries sent to them, within the organization’s pre-established 72-hour time frame. This provider’s response rate is 95 percent.
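The sketch below shows one way to count only responses received within the organization’s defined window, here assumed to be 72 hours; the query timestamps are invented:

```python
# Response rate counting only responses received within the defined window.
from datetime import datetime, timedelta

def response_rate(queries, window=timedelta(hours=72)) -> float:
    """queries: list of (sent_at, responded_at) pairs; responded_at may be None."""
    responded = sum(
        1 for sent, answered in queries
        if answered is not None and answered - sent <= window
    )
    return 100 * responded / len(queries)

queries = [
    (datetime(2022, 5, 1, 8, 0), datetime(2022, 5, 2, 9, 0)),  # within 72 hours
    (datetime(2022, 5, 1, 8, 0), datetime(2022, 5, 6, 9, 0)),  # too late to count
    (datetime(2022, 5, 2, 8, 0), None),                        # no response
]
print(f"{response_rate(queries):.0f}%")  # 1 of 3 = 33%
```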
Query Rate
Definition: The percentage of queries issued by CDI in relation to the number of reviews performed by CDI. This may be calculated by the actual number of queries sent or by the number of health records requiring one or more queries. The query rate may also be calculated by CDI professional, service line, overall department, provider, etc. This metric can provide the organization with a snapshot of documentation quality, provider buy-in, maturity of the CDI department, and educational opportunities.
Best Practice: The query rate will vary depending on the maturity of the CDI department.
- The recommended query rate would be calculated based on the number of health records that required a query or queries to be sent. This creates a standardized expectation to support comparison across CDI departments and organizations.
- A new CDI department may have a higher query rate than a mature CDI department, where providers have developed higher quality documentation practices.
- Developing continuous education will support high-quality documentation, which may result in the need for fewer queries.
- Differentiate between prospective, concurrent, and retrospective queries.
- Differentiate between computer-generated and CDI/coding-generated queries.
- Fully understand the advancement in CDI technology to help identify opportunities to query and how to capture the information. These can include computer-assisted coding (CAC), computer-assisted CDI (CACDI), auto-generated queries (e.g., nudges, prompts, opportunities, etc.), prioritization, and query type analysis (e.g., CC, MCC, HCC, quality, etc.).
- Develop an internal and/or external audit process regarding compliant and appropriate query practices.
- Organizations should develop clear policies and procedures regarding when a query should be sent that align with the Official Guidelines for Coding and Reporting and the practice brief “Guidelines for Achieving a Compliant Query Practice.”
For more information on query process compliance, please see the publications Guidelines for Achieving a Compliant Query Practice and Compliant CDI Technology Standards.
Benchmarking: Organizations and/or departments may establish a goal for the query rate. Each goal should be specific to their unique needs. At this time, there is no industry standard for the query rate. This is due to the many variables that impact the need for queries.
- A new CDI department may work with providers who have not received education on the importance and components of high-quality documentation and, thus, will require a higher number of queries.
- A mature department may work with providers who have developed stronger documentation practices, thus requiring a fewer number of queries.
- The query rate should be appropriate for the types of reviews the organization is performing. For example, if an organization is reviewing only for CC/MCC capture, the query rate may be lower than an organization that is also reviewing for quality measures.
- The use of technology can also impact the query rate as new areas of focus are added.
Equation: Number of health records that required one or more queries/number of health records reviewed.
Example:
- CDI professional A sends queries on five health records out of the 20 health records reviewed. CDI professional A has a query rate of 25 percent.
- The CDI department sends queries on a total of 60 health records out of the 200 health records reviewed. The CDI department’s query rate is 30 percent.
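Because the recommended calculation counts health records rather than individual queries, a record that receives multiple queries counts once. A minimal sketch with invented record identifiers:

```python
# Per-record query rate: a record with multiple queries counts once.

def query_rate(records_reviewed: list[str], queried_records: list[str]) -> float:
    queried = set(queried_records) & set(records_reviewed)
    return 100 * len(queried) / len(set(records_reviewed))

reviewed = [f"MRN{i:03d}" for i in range(20)]  # 20 reviewed records
# Two queries on MRN001 still count as one queried record.
queries_sent_on = ["MRN001", "MRN001", "MRN004", "MRN007", "MRN012", "MRN015"]
print(f"{query_rate(reviewed, queries_sent_on):.0f}%")  # 5 of 20 records = 25%
```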
Provider Query Agreement Rate
Definition: Measures the frequency with which the provider agrees with the need for further documentation clarity in the health record based on queries.
Best Practice:
- The agreement rate is a beneficial metric for determining the maturity of a CDI department and its workflow. A low provider agreement rate could indicate disengagement, non-compliant queries being sent, or a need for query process improvement (e.g., queries sent to the wrong provider, technology issues, etc.). An agreement rate that is consistently at or above the expected rate (e.g., 100 percent) may indicate that the provider is not fully reading the query or feels required to always agree with it. This may indicate a need for a compliance audit of the query response process (e.g., technology inaccurately capturing query agreement).
- Common definitions of an agreed and a not-agreed query should be established in the CDI department’s policies and procedures when the program is developed and reviewed at least annually. The agreement rate may vary by the type of query being sent (e.g., clinical validation, open-ended, etc.). For example, if the provider documents a new diagnosis or level of specificity that was not included in a list of multiple-choice options, would this be considered an agreed query?
- Agreement rate calculations should be defined as per query or per health record. This can also be calculated by individual CDI professional, provider, service lines, or department. This information can be used to support educational efforts and CDI performance tracking.
- To keep the focus on encouraging provider engagement, this metric is reported as the agreement rate rather than as its inverse, the disagreement rate.
Benchmarking:
- Organizations and/or departments may establish a goal for the agreement rate. Each goal should be specific to their unique needs. At this time, there is no industry standard for the agreement rate. This is due to the many variables that impact the calculations for the agreement rate.
- There will be times when an agreement rate reaches 100 percent; however, it would not be realistic to expect it to stay there continually. CDI leadership should review agreement rate trends on a long-term basis to identify any outliers in the metric. Outliers should be defined to determine when further evaluation should be performed (e.g., a CDI professional with a consistently low agreement rate may be sending non-compliant queries; a provider who receives only two queries and agrees to one may show a low agreement rate for that time frame but may meet the agreement rate goal over a longer period, etc.).
Equation: Number of queries that meet the definition of an agreed query/the number of queries sent.
Example:
- Provider A has 15 queries that meet the definition of an agreed query and had a total of 16 queries sent. Provider A has an agreement rate of 94 percent.
- CDI team B has had 60 queries that meet the definition of an agreed query on the cardiac floor, and they sent a total of 65 queries on the cardiac floor. The cardiac service line has an agreement rate of 92 percent.
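A sketch of the calculation, assuming a hypothetical policy-defined set of responses that count as agreement (the response labels are invented):

```python
# Agreement rate against an organization-defined set of "agreed" responses.

AGREED_RESPONSES = {"agree", "new diagnosis documented"}  # defined in policy

def agreement_rate(responses: list[str]) -> float:
    agreed = sum(1 for r in responses if r in AGREED_RESPONSES)
    return 100 * agreed / len(responses)

provider_a = ["agree"] * 15 + ["unable to determine"]  # 16 queries sent
print(f"{agreement_rate(provider_a):.0f}%")  # 15 of 16 = 94%
```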
CC/MCC Capture Rate
Definition: The complications/comorbidities and major complications/comorbidities (CC/MCC) capture rate measures the presence of impactful CC(s) and/or MCC(s) captured as secondary diagnoses.
Best Practice:
- The calculation of the CC/MCC capture rate can be adjusted according to the needs of the CDI department.
- The organization may need to define the Medicare Severity Diagnosis Related Groups (MS-DRGs) to be excluded from the calculation (e.g., live births, tracheostomies, transplants).
- This calculation can focus on identifying CC/MCCs that have a financial impact.
- This calculation can also review the total number of CC/MCCs reported, also known as CC/MCC depth rate.
- Some organizations may report both financial and total number (depth rate) of CC/MCCs.
- For more information regarding the capture of this metric, please refer to Using CC/MCC Capture Rates as a Key Performance Indicator.
Benchmarking: Using national and regional reports to determine realistic goals for each unique organization (e.g., Program for Evaluating Payment Patterns Electronic Reports (PEPPER)).
Equation: “The total number of cases that have ‘with CC,’ ‘with MCC,’ or ‘with CC/MCC’ in the DRG description divided by the total number of cases, and the result is expressed as a percentage” (Using CC/MCC Capture Rates as a Key Performance Indicator).
Example:
- Hospital A has a total of 1,000 cases with a CC, 500 cases with an MCC, and 200 with a CC/MCC out of 2,000 cases (after applying defined exclusions). The CC/MCC capture rate would be 85 percent (1,700/2,000).
- The overall national Medicare CC/MCC capture rate can be retrieved from industry trend reports such as https://www.panaceainc.com/wp-content/uploads/2018/09/PanaceaCCMCCTrendStudy.pdf.
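A sketch of the quoted equation, classifying invented MS-DRG descriptions by their “with CC/MCC” designation (case counts match the Hospital A example above):

```python
# CC/MCC capture rate from MS-DRG descriptions; descriptions are illustrative.

def cc_mcc_capture_rate(drg_descriptions: list[str]) -> float:
    captured = sum(
        1 for d in drg_descriptions
        if any(tag in d.upper() for tag in ("WITH CC/MCC", "WITH MCC", "WITH CC"))
    )
    return 100 * captured / len(drg_descriptions)

cases = (
    ["HEART FAILURE & SHOCK WITH CC"] * 1000
    + ["HEART FAILURE & SHOCK WITH MCC"] * 500
    + ["CELLULITIS WITH CC/MCC"] * 200
    + ["HEART FAILURE & SHOCK WITHOUT CC/MCC"] * 300  # excluded from the numerator
)
print(f"{cc_mcc_capture_rate(cases):.0f}%")  # 1,700 of 2,000 = 85%
```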
Denial Rate
Definition: The percentage of claims denied by payers within a specific time frame. This can be further broken down by type of denial (e.g., documentation issues, coding, medical necessity, etc.).
Best Practice:
- Define when denied claims will be reviewed and the denial rate calculated. For example, payers may vary in the time frame it takes to process a claim. CDI departments may want to work with their billing department to determine time frames and procedures for calculating the denial rate.
- Determine the types of denied claims that will be tracked by the CDI department (e.g., denied claims for missing documentation, denied claims related to clinical validation, etc.).
Benchmarking: Denial rates can vary throughout the year. Payers may group denials together or spread them out over an extended time frame. These variations should be considered when determining the timing of the calculation. This may be done on an annual, biannual, or quarterly basis depending on the needs of the organization.
Equation: The number of claims denied/the number of claims submitted within the determined time frame.
Example:
- Hospital A has 10 claim denials for missing documentation out of 180 claims submitted over a one-month period. The denial rate for missing documentation is approximately 6 percent.
- A cardiac clinic for provider A received 240 claim denials for the lack of supporting clinical evidence out of 3,000 claims submitted in the first quarter. The denial rate for the lack of supporting clinical evidence is 8 percent.
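A sketch of a per-type denial rate, using invented denial categories and the one-month volumes from the first example:

```python
# Denial rate per denial type = denials of that type / claims submitted.
from collections import Counter

def denial_rates(denials: list[str], claims_submitted: int) -> dict[str, float]:
    counts = Counter(denials)
    return {t: 100 * n / claims_submitted for t, n in counts.items()}

month_denials = ["missing documentation"] * 10 + ["clinical validation"] * 4
for denial_type, rate in denial_rates(month_denials, 180).items():
    print(f"{denial_type}: {rate:.1f}%")
# missing documentation: 5.6%
# clinical validation: 2.2%
```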
MS-DRG Reconciliation Rate
Definition: The percentage of encounters reconciled by the CDI team to determine if CDI and coding have the same MS-DRG assignment (e.g., CDI final working MS-DRG versus coding final MS-DRG). The intent of this internal metric is to allow the CDI team to view a chart from a coding perspective and identify any opportunities related to documentation and coding integrity.
Another opportunity is to measure the percentage of revised MS-DRGs (the coding MS-DRG may be revised to the CDI working MS-DRG or to an entirely different DRG). For example, say that in the month of May there were 56 cases where the CDI final working DRG did not match the coding MS-DRG. The CDI and coding specialists then reviewed these cases together pre-billing and reached a conclusion on each one. In 40 of the 56 cases (71 percent), the DRG was revised from the coding DRG to another DRG, and this DRG change had an associated financial impact that can be monitored.
Some organizations may expand upon areas of reconciliation, such as monitoring the match between working and final All Patient Refined (APR)-DRGs.
Best Practice:
- Develop a communication process for the CDI and coding teams to discuss case reconciliation.
- Define a process for case reconciliation. For example, will this be done via individual CDI professionals, peer-to-peer, second level reviewers, auditors, etc.?
- Education should be provided when themes emerge regarding the reasons for mismatches between the working and final MS-DRGs.
- Education should be provided regarding the use of technology in the MS-DRG reconciliation process to ensure an accurate calculation.
- The experience level of the CDI professional can influence the reconciliation rate. It can take several years to gain the knowledge needed to achieve a higher reconciliation rate.
Benchmarking:
- It is common for organizations to see some health records with discrepancy between the CDI working and final MS-DRGs; therefore, a 100 percent reconciliation rate may be rare.
- There are several variables that can impact the MS-DRG reconciliation rate, which may include, but are not limited to, the following:
- The CDI professional does not re-review a case and does not update the working MS-DRG based on the complete health record.
- Whether the assignment of ICD-10-PCS codes is part of the CDI workflow.
- Whether all diagnoses/procedures that impact the MS-DRG assignment were captured in code assignment.
Equation: The number of reconciled DRGs/number of DRGs reviewed.
Example:
- The CDI department has 800 reconciled DRGs out of 1,000 DRGs reviewed; the reconciliation rate is 80 percent.
- The CDI cases in the neurology unit at Hospital B have 200 reconciled DRGs out of 300 DRGs reviewed; the reconciliation rate is 67 percent.
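A sketch of the reconciliation rate, comparing an invented pairing of the CDI final working MS-DRG and the coding final MS-DRG for each reviewed encounter:

```python
# Reconciliation rate = encounters where the CDI working MS-DRG matches
# the coding final MS-DRG / encounters reviewed.

def reconciliation_rate(encounters: list[tuple[str, str]]) -> float:
    matched = sum(1 for cdi_drg, coding_drg in encounters if cdi_drg == coding_drg)
    return 100 * matched / len(encounters)

cases = [("291", "291")] * 800 + [("292", "291")] * 200  # 800 of 1,000 match
print(f"{reconciliation_rate(cases):.0f}%")  # 80%
```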
Case Mix Index (CMI)
Definition: The average of all MS-DRG relative weights for hospital inpatient discharges per a specified patient population. This number represents the severity of illness for the patient population within the organization.
Best Practice:
- This metric may be monitored on a monthly, quarterly, or annual basis depending on the reporting structure for the organization.
- This metric may be reported by specific payers (e.g., Medicare, payers utilizing MS-DRGs) or patient population (e.g., total population, specific service, etc.).
- There are multiple variables that can impact the CMI (e.g., provider vacations, fewer surgeries, types of services being delivered, etc.); therefore, these should be taken into consideration when calculating the impact to CMI from CDI efforts.
- When comparing CMI to other organizations, it is important to compare to an organization that delivers the same services (e.g., surgeries, teaching institutions, trauma levels, etc.).
Benchmarking:
- Multiple factors may impact the case mix index. For example, when a cardiovascular and transplant surgeon is on vacation, this might decrease the monthly CMI because there weren’t as many scheduled surgical cases.
- Some organizations may want to exclude some cases that might skew the information from this calculation (e.g., tracheostomies).
Equation: Sum of MS-DRG relative weights/number of discharges.
Example:
- Hospital A has a total MS-DRG relative weight for its Medicare population for one month of 45 across 40 discharges. The CMI would be 45/40 = 1.125.
- Hospital B has a total MS-DRG surgical relative weight for a specific payer population for the first quarter of 150 across 75 discharges. The CMI would be 150/75 = 2.00.
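A sketch of the CMI calculation; the per-discharge relative weights are invented so that they sum to 45 across 40 discharges, matching the Hospital A example:

```python
# CMI = sum of MS-DRG relative weights / number of discharges.

def case_mix_index(relative_weights: list[float]) -> float:
    return sum(relative_weights) / len(relative_weights)

weights = [0.8] * 20 + [1.45] * 20  # 40 discharges, weights summing to 45
print(round(case_mix_index(weights), 3))  # 1.125
```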
More information about CMI calculations can be found in the book Principles of Healthcare Reimbursement and Revenue Cycle Management.
Risk Adjustment Factor (RAF) Score
Definition: This score is used to predict the future healthcare costs for patients. It is a score that includes the patient’s inpatient and outpatient conditions and demographic information from the preceding year. The RAF score predicts the expected cost of care for the patient (e.g., higher RAF score equals higher expected cost).
Best Practice:
- The documentation sources that influence the predicted RAF score should be clearly defined in the policies and procedures.
- Documentation should be supported by MEAT (monitor, evaluate, assess/address, treat) or TAMPER™ (treatment, assessment, monitor or medicate, plan, evaluate, referral) criteria.
- Different payer risk adjustment models (e.g., CMS-hierarchical condition category (HCC), Health & Human Services (HHS)-HCC, etc.) represent different patient populations. These methodologies have differing variables and formulas that impact the overall risk adjustment factor (RAF) score.
Benchmarking:
- Multiple factors may impact the total risk adjustment factor annually. For example, the patient’s demographics (e.g., age, geographic location, gender, etc.) and the annual documentation of conditions and/or their level of specificity may impact the individual risk factors used to calculate the beneficiary’s annual RAF score.
- The CMS-HCC model is prospective: all diagnoses for one year are used to predict costs for the following year. CMS has multiple models to address the different beneficiary populations (e.g., Part C, Part D, ESRD, RxHCC, etc.).
- The HHS-HCC model is concurrent: diagnoses from a specific time frame are used to predict costs for that same time frame.
Equation example (will vary depending on the payer model): individual risk adjustment factors assigned to patient demographics + the risk adjustment factors for all identified HCCs (i.e., all diagnoses mapped to an HCC) = total risk adjustment factor for the beneficiary (patient).
Example:
- CMS-HCC example for the 2022 benefit year: female, age 65, continuing enrollee, institutional = 1.245; diabetes without complications (HCC 19, Diabetes without Complications) = 0.178; major depressive disorder, recurrent, mild (HCC 59, Major Depressive, Bipolar, and Paranoid Disorders) = 0.187. Total RAF = 1.245 + 0.178 + 0.187 = 1.610.
- HHS-HCC example for the 2022 benefit year: female, age 49, gold plan = 0.329; diabetes without complications (HCC 021, DM without Complications) = 0.360; severe major depressive disorder (HCC 088, Major Depressive Disorder, Severe, and Bipolar Disorders) = 1.249. Total RAF = 0.329 + 0.360 + 1.249 = 1.938.
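A sketch of the additive structure shown in the CMS-HCC example above (coefficients are copied from that example; actual models apply further adjustments, such as interaction terms and normalization factors):

```python
# Total RAF = demographic factor + sum of the captured HCC coefficients.

def raf_score(demographic_factor: float, hcc_factors: dict[str, float]) -> float:
    return demographic_factor + sum(hcc_factors.values())

hccs = {
    "HCC 19 (Diabetes without Complications)": 0.178,
    "HCC 59 (Major Depressive, Bipolar, and Paranoid Disorders)": 0.187,
}
# Female, age 65, continuing enrollee, institutional = 1.245
print(round(raf_score(1.245, hccs), 3))  # 1.61
```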
Per the CY 2022 Rate Announcement, CMS calculates risk scores as proposed in Part I of the CY 2022 Advance Notice. CMS completed phasing in the model implemented in 2020, which meets the statutory requirements of the 21st Century Cures Act (Pub. L. 114-255). The 2020 CMS-HCC model (previously known as the alternative payment condition count (APCC) model) is used with no blending for the risk score calculation; that is, 100 percent of the risk score is calculated with the 2020 CMS-HCC model. References can be found here:
- https://www.cms.gov/files/document/2022-announcement.pdf
- https://www.cms.gov/Medicare/Health-Plans/MedicareAdvtgSpecRateStats/Downloads/Advance2020Part1.pdf
- Note: For CY 2023, CMS will continue to calculate 100 percent of the risk score using the 2020 CMS-HCC model. The Advance Notice can be found here: https://www.cms.gov/files/document/2023-advance-notice.pdf
- Final Adult Risk Adjustment Model Factors for 2022 Benefit Year can be found here: https://www.cms.gov/files/document/updated-2022-benefit-year-final-hhs-risk-adjustment-model-coefficients-clean-version-508.pdf
Key Performance Indicators Data
Once performance indicators have been established, a process should be developed for measurement, accurate data collection, and analysis. Many organizations have struggled over the years to leverage all the data that is available. For the information to be meaningful, the documentation must be reliable for accurate reporting. For many external metrics and registries, the data source is coded data from claims (e.g., ICD-10-CM codes) submitted by the organization. Providers may not be aware that their documentation is captured through many types of coded data; therefore, it is important to educate providers on documentation requirements.
Data collection is the key to measuring the key performance indicators selected for your team. It is important to select the most appropriate data collection method for the indicator in question so organizational objectives can be achieved.
There are two types of data, primary and secondary, that might be used for evaluation of performance indicators. Primary data is data collected first-hand by the department; for example, calculating the department’s query rate from a count of the number of reviews and the number of queries generated.
- Primary data collection methods are classified as quantitative or qualitative; the method used depends on the indicator being measured. Quantitative data collection is based on mathematical calculations and is what a CDI department may be most familiar with: a numerical count of indicators used to calculate the percentage of a metric achieved. Qualitative data, on the other hand, is not based on mathematical calculations but uses open-ended questions to collect information. This type of collection might be used when interviewing providers or staff regarding their specific educational needs.
- Secondary data is collected by another source and may be useful in evaluating a CDI department, depending on the metric being measured. For example, this can be used to track changes in the mortality index for a CDI department from data collected and reported by an outside entity. The CDI department can directly affect this secondary data by ensuring the documented and coded information is accurate and to the highest level of specificity.
CDI professionals are a valuable part of the KPI data analysis process, education development, and delivery of findings. The process of data collection in CDI departments has been facilitated in recent years by technology and CDI software that provide tools for collection and analysis. The use of these tools should be clearly documented. Professionals working with the data should have a clear understanding of the outcomes and possible deliverables from the data. Consistency, timeliness, and accuracy are key to having quality, reliable data for decisions regarding the performance of the department and the individual CDI professional. A job aid may be helpful to ensure the definitions of the data and collection methods are uniform across the department/organization. Policies and procedures should clearly reflect the data collection and entry process. Adopting the KPIs discussed in this practice brief can support clear definitions and eliminate misinterpretation.
Data analysis is the next step in the process. This may begin with a summarization to identify any outliers and/or gaps in the data entry process, followed by an analysis of data trends. This information can then be used to guide decision-making and to promote growth, educational opportunities, and/or expansion of the CDI department.
Reporting data requires presenting it in a manner that can be easily explained and comprehended. Different delivery methods may be used depending on the intended audience and the frequency of disseminating the information. Software can facilitate the organization of charts, tables, graphs, and other visual formats.
Desired Outcomes
Accurate Code Classification for Reporting
The health record documentation needs to be specific and comprehensive for accurate code assignment, which may impact revenue. The documentation in the health record needs to be consistent and reflect the highest level of specificity that is supported by the clinical evidence. Concurrent CDI review can be key in clarifying the health record documentation while the patient is still being treated, which helps avoid post-discharge queries.
CDI reviews should focus on the elements of high-quality clinical documentation and not focus solely on one element such as CC, MCC, HCC, etc. It is important that each health record represents a clear, consistent, and accurate clinical picture. This includes the capture of all clinically supported acute and chronic conditions, which will capture medical necessity, quality measures, and assist in denials management.
Complication clarification is also a key indicator for a CDI review. It is important to validate possible complications prior to coding and data submission. When documentation is ambiguous, a complication could be coded and reported inappropriately. For example, hospital-acquired conditions (HACs), patient safety indicators (PSIs), and surgical complications may impact reimbursement, quality scores, and provider/organization profiles.
Performance Improvement
Recognizing the purpose and intent of KPIs helps provide the framework for performance improvement. Performance improvement should be seen as a continuous opportunity to influence and positively affect outcomes. Collaboration with key stakeholders will help develop KPIs that promote accountability, reflect performance, and align with organizational goals. It is important that CDI leaders have a good comprehension of the reporting functions built into their systems and/or utilized in their CDI processes. By leveraging reporting functions and collaborating with key stakeholders, CDI leaders can measure and identify opportunities to enhance and expand CDI initiatives and process improvement. Successful performance improvement initiatives may include (but are not limited to) proactive, data-driven decisions that ensure the financial health of the organization and compliance with industry guidance.
Registries and Other CDI and Coding Areas of Opportunities
Registries are databases used to collect disease-specific data for public health or quality improvement purposes. Although registries can serve many purposes, they are classified according to how their populations are defined. Generally, for disease-based registries, cases are selected by principal diagnosis and/or procedure based on the specific inclusion requirements of each registry’s criteria. Selected cases are clinically validated to ensure the accuracy of coding prior to case abstraction.
Presently, there is no specific way to determine whether there is a direct CDI impact on KPIs within registries. However, CDI can assist indirectly and impact payment methodologies by ensuring providers report the correct diagnoses, procedures, and other complications so that patients are accurately included in or excluded from measure cohorts (Vahey & Wilk, 2020, p. 152). Each facility should identify all registries that impact its data collection and submission and provide guidance to the CDI department regarding its role in this initiative. Examples of these registries are tumor registries, trauma registries, pacemaker registries, etc.
The interrelationship between clinical registries and performance measures will become even more important as risk-adjusted outcome data are used for high-stake applications such as public report cards, preferred provider networks, and reimbursement (Bhatt DL, Drozda JP, Shahian DM). Future efforts toward interprofessional collaboration of CDI, coding, and quality areas may provide for a continuous positive feedback loop mechanism in a more direct way in the revenue cycle to improve key performance metrics.
Engagement of Providers
Providers must be essential participants in changing the system to improve outcomes. They can’t navigate modern healthcare reimbursement and delivery requirements without support. Providers want—and need—relevant and concise documentation guidance that doesn’t detract from their face-to-face time with patients. Guidance may include the utilization of unique and creative approaches to address today’s quality-driven initiatives; uniting documentation and coding across the healthcare continuum to enhance data integrity; as well as optimizing return on investment and long-term compliance.
Achieving high levels of provider engagement can prove difficult for many healthcare organizations. Using peer-to-peer or specialty-specific education methods may increase provider engagement. The physician response rate may be used as an indicator to assess provider engagement. Providers who respond to queries in a thoughtful and timely manner may be seen as more engaged than those who do not respond, continually decline queries, or respond with non-codable answers.
A physician advisor (PA) is integral to the success of CDI efforts. Leading organizations indicate that communication with a physician champion/advisor validates the strength of peer-to-peer engagement.
As an extension of a CDI department, a strong PA can advance clinical coding insights not only to reinforce CDI and coding but also to enhance provider engagement, increase query response and acceptance rates, and foster long-term success. Additionally, PAs contribute to revenue growth and denial reduction.
Summary
KPIs will vary by organization; however, data collection and data entry should be clearly defined within the organization. The key to data accuracy and consistency is educating all who are involved in the collection and entry process. It is important that all CDI professionals have a solid knowledge base in the CDI KPI process and its impact areas to ensure all information is captured accurately. An audit process should be established to verify the consistency and validity of data entry. Tracking the effectiveness of CDI KPIs will bring valuable information to the organization and the industry, continually promoting and advancing the quality of health record documentation.
References
- Bhatt, D.L., Drozda, J.P., & Shahian, D.M. The Future of Registries and Performance Measures. Accessed on March 19, 2022. https://www.acc.org/latest-in-cardiology/ten-points-to-remember/2015/10/01/13/15/acc-aha-sts-statement-on-the-future-of-registries
- Best Practices for Data Analytics Reporting Lifecycles: Quality in Report Building and Data Validation. AHIMA (2018)
- CMS.gov (2022). MEDPAR Limited Data Set (LDS) – Hospital (National) page. Accessed on January 19, 2022. https://www.cms.gov/Research-Statistics-Data-and-Systems/Files-for-Order/LimitedDataSets/MEDPARLDSHospitalNational#:~:text=The%20Medicare%20Provider%20Analysis%20and,billing%20number%20identifies%20the%20hospital.
- CMS.gov (2022). Data Analysis Support and Tracking. Accessed on January 16, 2022. https://www.cms.gov/Research-Statistics-Data-and-Systems/Monitoring-Programs/Data-Analysis.
- Q2FY21 Short-Term Acute Care PEPPER Webinar Review (2021). Accessed on January 16, 2022. https://pepper.cbrpepper.org/Portals/0/Documents/PEPPER/ST/q2fy21-st-pepper-review-508.pdf.
- Freed, M., Biniek, J.F., Damico, A., & Neuman, T. (2021). “Medicare Advantage in 2021: Enrollment Updates and Key Trends.” Accessed on January 19, 2022. https://www.kff.org/medicare/issue-brief/medicare-advantage-in-2021-enrollment-update-and-key-trends/.
- CMS.gov (2022). BPCI Advanced. Accessed on January 18, 2022. https://innovation.cms.gov/innovation-models/bpci-advanced.
- Redmond, K.P. (2020). “Steps to Improve Physician Engagement.” Physicians Practice, March 24, 2020.
- Showalter, J., & Williams, L.T. Mastering Physician Engagement: A Practical Guide to Achieving Shared Outcomes.
- “Physician Engagement and Creating Time to Care.” ACDIS Leadership Survey, 2020.
- https://journal.ahima.org/understanding-cdi-metrics/
- The AAOS Registry Program. AAOS. Accessed January 9, 2022. Available from: https://www.aaos.org/registries/
- Advancing Value-Based Models for Heart Failure (2020). Cardiovascular Quality and Outcomes, 13:e006483. Available from: https://www.ahajournals.org/doi/10.1161/CIRCOUTCOMES.120.006483
- Chavis S. (2021). KPIs Shift with the Times. For The Record, 33(6), 10–13. https://www.fortherecordmag.com/archives/ND21p10.shtml
- Get With The Guidelines. AHA. Accessed January 9, 2022. Available from: https://www.registrypartners.com/registry/get-with-the-guidelines/
- National Program of Cancer Registries (NPCR). CDC. Accessed January 9, 2022. Available from: https://www.cdc.gov/cancer/npcr/index.htm.
- Vahey, A., & Wilk, D. (Eds.). (2020). CDI and Quality Reporting: How Healthcare Record Review Can Improve Outcomes. HCPro, a Simplify compliance brand.
- BYJU’S (2021). “Data Collection Methods | Methods of Primary and Secondary Data.” Retrieved from byjus.com.
- King, M. (2021). “6 Key Differences Between Data Analysis and Reporting.” Retrieved from the Databox blog.
- Research Evaluation Consulting (2021). “6 Tips to Collect Quality Data.” Retrieved from Research Evaluation Consulting.
Acknowledgments
Laritha Boone, MBA-HCM, RHIA
Patricia Buttner, MBA/HCM, RHIA, CDIP, CHDA, CPHI, CCS, CICA
Tammy Combs, RN, MSN, CDIP, CCS, CNE
Liz Curtis, MA, RHIA, CHPS, FAHIMA
Angie Comfort, MBA, RHIA, CDIP, CCS, CCS-P, CICA
Jean Allen Ellington, RHIA
Margaret M. (Maggie) Foley, PhD, RHIA, CCS
Shannon Houser, PhD, MPH, RHIA, FAHIMA
Alina Hughes, MHA, RHIA, CDIP, CCS
Lisa Hunter
Suzy Johnson, MS, RHIA, CHPS
Katherine Kozlowski, RHIA, CCS, CDIP
Kathleen Peterson, MS, RHIA, CPHI, CCS
Reba Sanders, RHIA
Gina Sanvik, MS, RHIA, CCS, CCS-P
Michael Stearns, MD, CPC, CRC, CFPC
Aerian Tatum, DBA, MS, RHIA, CCS, CPHIMS
Anita Whelan, RHIT, MBA