# Using Decision Trees to Explore EHR Options

In the August 2010 print edition, Tiankai Wang and Sue Biedermann applied the cost-benefit analysis model to EHR adoptions. Here the authors describe the decision tree model, another tool to aid in the complex considerations that come with choosing to implement—or delay implementing—an EHR system.

Tiankai Wang, PhD, is an assistant professor in the health information management program at Texas State University in San Marcos, TX. Sue Biedermann, MSHP, RHIA, is an associate professor and chair of the health information management program at Texas State University.

* * *

The federal meaningful use program offers incentives to professionals and hospitals that adopt and demonstrate meaningful use of electronic health records. Although the incentives help offset the costs of purchasing and implementing an EHR, they have not simplified the many decisions providers must make in choosing how and when to begin.

The decision procedure is complex because providers face so many options. They must choose between building and buying a system, and if they buy a system, they must choose among myriad systems and components. They may consider leasing an EHR, and if they do, they must plan ahead to the end of the contract term.

A decision tree is a powerful tool that can aid in decision making. It provides an effective structure within which to explore alternatives and investigate the possible outcomes of each. A decision tree helps form a balanced picture of the risks and rewards associated with each possible outcome.

Decision trees are especially useful when outcomes are uncertain, which makes them well suited to EHR decisions, where uncertainty is dominant.

##### A Statistical Prerequisite

Understanding and applying a decision tree model requires a brief refresher on statistics.

An expected value (or expectation value, or mathematical expectation, or mean, or first moment) is the mean of a probability distribution. When the distribution is discrete, an expected value can be calculated in the following way:

E = ∑(ap)

where:

- E denotes the expected value;
- a denotes the outcome of an event;
- p denotes the probability that the event happens (p is a number between 0 and 1, and the probabilities of all possible outcomes sum to 1); and
- ∑ denotes the sum over all possible outcomes.

Take, for example, wagering on a coin toss. A player wins \$1 if the coin comes up heads and loses \$1 if it comes up tails. What is the expected value of this game?

E = ∑(ap)

= a₁p₁ + a₂p₂

= (1 × 0.5) + (−1 × 0.5) = 0

The expected value is \$0.
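This calculation is easy to verify in a few lines of Python. The helper function below is ours, not part of the article; it simply implements E = ∑(ap) for a discrete distribution:

```python
# Expected value of a discrete distribution: E = sum over all
# outcomes of (outcome * probability).
def expected_value(outcomes, probabilities):
    if abs(sum(probabilities) - 1.0) > 1e-9:
        raise ValueError("probabilities must sum to 1")
    return sum(a * p for a, p in zip(outcomes, probabilities))

# Coin-toss wager: win $1 on heads (p = 0.5), lose $1 on tails (p = 0.5).
print(expected_value([1, -1], [0.5, 0.5]))  # 0.0
```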

##### Processing a Simple Decision Tree

Constructing a decision tree begins by sketching the logical relations between the decision, the available alternatives, and their resulting events. In drawing this diagram, a square represents a decision node, where the decision has to be made and the inferior branches will be cut. A circle represents an event node, where an expected value must be calculated. Doing nothing is an alternative.

A decision tree is sketched from left to right, beginning with the square: the decision that needs to be made. Branches are then drawn from each node to represent every potential alternative, together with the resulting outcomes of each.

Decision trees are not limited to financial decisions; however, they require that a numerical value be assigned to each potential outcome.

The example that follows in this article is a simple one, intended to provide an overview of the decision tree process. The probabilities and values provided are not intended to represent actual experience.

In the decision tree shown in figure 1 a small hospital faces the decision of whether to purchase an EHR immediately. The square on the left is the problem. It has two alternatives, drawn as branches from the root square. One is to purchase an EHR immediately; the other is to maintain the status quo. Should the hospital choose to purchase an EHR immediately, it can foresee two potential outcomes: the EHR may run well or it may not. As for the other alternative, the status quo, only one event exists.

Next the hospital must estimate the outcome of each potential event and the probability of this event happening. To continue with the example begun in figure 1, the hospital estimates an EHR that runs well will result in a net positive outcome of \$300,000. It estimates the probability of the EHR running well to be 0.7.

If the EHR does not run well, however, the hospital estimates it will suffer a loss of \$500,000. It estimates this probability to be 0.3. If the hospital keeps the status quo, it has no gain or loss. These additions to the tree are shown in figure 2. Next, the hospital will analyze and cut the branches from the right to the left.

Beginning with the option of purchasing an EHR immediately, the organization multiplies the probability of the EHR running well by the expected outcome (0.7 x \$300,000 = \$210,000) and adds it to the probability that the EHR does not run well multiplied by that expected outcome (0.3 x -\$500,000 = -\$150,000). The expected value is \$60,000, shown in figure 3.

The nature of decision trees allows the organization to work through multiple scenarios. If the organization adjusts its estimated probability of success to 50-50, it foresees a very different outcome. By adjusting the probabilities to 0.5 and 0.5, the organization will forecast a loss of \$100,000, shown in figure 4.

Therefore, if it cannot expect the probability of the EHR running well to exceed the break-even point for these outcomes (0.625), it should choose the status quo until it can project a higher rate of success. Similarly, different estimates for expected outcomes also impact the organization's final decision.
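Both scenarios reduce to the same expected-value formula. A minimal sketch in Python (the `expected_value` helper is ours, not from the article):

```python
def expected_value(outcomes, probabilities):
    return sum(a * p for a, p in zip(outcomes, probabilities))

# Figure 3: EHR runs well (+$300,000) with probability 0.7,
# runs poorly (-$500,000) with probability 0.3.
purchase = expected_value([300_000, -500_000], [0.7, 0.3])

# Figure 4: the same outcomes with a 50-50 chance of success.
purchase_5050 = expected_value([300_000, -500_000], [0.5, 0.5])

print(round(purchase))       # 60000
print(round(purchase_5050))  # -100000
```

Because the status quo has an expected value of \$0, the purchase branch survives only while its expected value stays positive.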

##### Managing Estimates

The accuracy and reliability of estimates are vital to the success of the decision tree. Estimating the outcomes and probabilities is the most difficult—and controversial—aspect of the process.

Generally, quantitative prediction models, such as regression models, are used to improve the accuracy and reliability of the estimates. However, quantitative models rely mainly on historical data. In the case of EHR adoptions, historical data are limited because many EHR products are new to the market.

Currently, organizations use requests for information and requests for proposal to gather information about EHRs. During this phase, the organization should discuss how the system is working in other facilities with both the vendors and existing users.

One serious limitation of this method is that each provider is different. A system that ran well in one setting may not be effective in another, and the reasons may not be readily apparent. However, user interviews are important, and the more information the organization gathers, the more accurate its estimates will be. EHR experts and reputable research are other sources that can inform estimates.

Closely defining the organization’s own objectives and expectations also is important in creating good estimates. Clearly it is too imprecise to evaluate an EHR simply on whether it “runs well” or not. In most cases, an EHR matches some but not all of a provider’s needs; that is, the probability of a partial match is higher than that of matching all needs or none. Moreover, many factors influence an EHR’s success, including human factors such as leadership.

##### Branching out the Additional Outcomes

Figure 5 depicts a more realistic and complex set of outcomes. In this scenario, there are three outcomes for adopting an EHR: the system matches full needs, matches partial needs, or matches no needs. The probabilities are set as 0.3, 0.6, and 0.1, respectively. These outcomes are explored in further branches. In cases of matching full or partial needs, the extent to which staff members adopt and use the EHR further diversifies the outcomes. This applies to all users, both clinical and administrative.

Efficiency of use will vary with the EHR’s performance: the more needs the system meets, the more efficiently staff will use it. If the system matches few needs, staff will use it inefficiently or not at all.

Notice that in this scenario the hospital will suffer a loss if it does nothing, an outcome of the meaningful use program, which will eventually penalize providers that have not adopted EHRs. The hospital estimates this loss to be \$10,000.

To analyze the tree, the hospital calculates the expected value at each event node. For example, the expected values at nodes A and B are:

E (A) = (0.7 x \$150,000) + (0.3 x -\$100,000) = \$75,000

E (B) = (0.6 x \$80,000) + (0.4 x -\$120,000) = \$0

The branch “adopt an EHR” can then be simplified as shown in figure 6. Its expected value is:

E (Adopt an EHR) = (0.3 x \$75,000) + (0.6 x \$0) + (0.1 x -\$200,000) = \$2,500.

Or, the hospital can apply the multiplication principle in statistics to calculate the outcomes in each branch in figure 5 as follows:

Branch (1) = 0.3 × 0.7 × \$150,000 = \$31,500

Using this same calculation, the other branches are as follows: branch (2) = -\$9,000; branch (3) = \$28,800; branch (4) = -\$28,800; and branch (5) = -\$20,000.

The expected value of the branch “Adopt an EHR” is the sum of these five outcomes: \$2,500.

The status quo, with an expected outcome of -\$10,000, is inferior to adopting an EHR. Therefore, this branch is cut. The hospital chooses to adopt an EHR.
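The multiplication-principle calculation can be reproduced in a few lines of Python. The branch labels below are our reading of figure 5, which the article describes but does not show here:

```python
# Each branch: (path probability, terminal outcome). The path
# probability is the product of the probabilities along the branch
# (the multiplication principle).
branches = [
    (0.3 * 0.7, 150_000),   # (1) full match, used efficiently
    (0.3 * 0.3, -100_000),  # (2) full match, used inefficiently
    (0.6 * 0.6, 80_000),    # (3) partial match, used efficiently
    (0.6 * 0.4, -120_000),  # (4) partial match, used inefficiently
    (0.1 * 1.0, -200_000),  # (5) no match
]
adopt = sum(p * a for p, a in branches)
status_quo = -10_000  # projected meaningful use penalty for doing nothing

print(round(adopt))        # 2500
print(adopt > status_quo)  # True, so the status quo branch is cut
```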

##### Drawing out the Decisions within Decisions

In some scenarios, the final decision depends on a further level of options within the decision procedure. Decision trees can accommodate additional levels easily.

In the decision tree shown in figure 7, the notable differences are the two subdecision nodes on the “lease” branch. They indicate that when the lease contract with an application service provider (ASP) ends, the provider must decide its next step according to the EHR’s performance. If the EHR works well, the provider can purchase it or continue leasing. If the EHR works only partially, the provider can seek to improve it through the vendor, keep it as is, or switch to another EHR. For each alternative, different outcomes exist. In this scenario, the hospital treats each subdecision node (the squares on the branches) independently as a decision tree problem. These are illustrated in the diagram within the dotted-line boxes.

On the branch of “Lease: Works well,” “Continue leasing” is cut.

On the branch “Lease: Partially,” the expected value of “Improve” is (0.4 x \$100,000) + (0.6 x \$10,000) = \$46,000. Therefore, both “Keep as is” and “Give up and switch” are cut. Now, the decision tree is simplified as shown in figure 8. The hospital can now calculate the expected value of the two main branches:

E (Purchase) = (0.3 x \$100,000) + (0.6 x \$10,000) + (0.1 x -\$300,000) = \$6,000

E (Lease) = (0.3 x \$90,000) + (0.6 x \$46,000) + (0.1 x -\$450,000) = \$9,600

Since the expected outcome of purchasing is inferior to the expected outcome of leasing, the hospital can draw the conclusion that leasing is its best option.
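This fold-back procedure generalizes to any depth: event nodes return the probability-weighted average of their children, and decision nodes keep the best child and cut the rest. A minimal sketch using the figure 8 values (the tuple-based node encoding is ours, not the article's):

```python
def evaluate(node):
    """Fold back a decision tree from the leaves to the root."""
    kind, rest = node
    if kind == "leaf":
        return rest  # terminal outcome
    if kind == "event":
        # Expected value over probability-weighted children.
        return sum(p * evaluate(child) for p, child in rest)
    if kind == "decision":
        # Keep the best alternative; inferior branches are "cut".
        return max(evaluate(child) for child in rest)
    raise ValueError(f"unknown node kind: {kind}")

purchase = ("event", [(0.3, ("leaf", 100_000)),
                      (0.6, ("leaf", 10_000)),
                      (0.1, ("leaf", -300_000))])
lease = ("event", [(0.3, ("leaf", 90_000)),
                   (0.6, ("leaf", 46_000)),  # "Improve" subdecision, already folded back
                   (0.1, ("leaf", -450_000))])

print(round(evaluate(purchase)))  # 6000
print(round(evaluate(lease)))     # 9600
print(round(evaluate(("decision", [purchase, lease]))))  # 9600, so leasing wins
```

Deeper trees, such as the full figure 7 with its subdecision nodes, can be expressed by nesting `decision` nodes inside `event` branches; the recursion handles them automatically.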

##### Statistical Methods to Aid Estimating Outcomes

As noted, one limitation of the decision tree model is that the final decision relies on the accuracy of the data. The model requires quantitative input to give a complete picture, but in reality many outcomes are difficult to quantify. Cost-benefit analysis offers a method to estimate the outcome (the net present value, or NPV) for each alternative.

Another limitation of the decision tree model is that the probability can only be estimated. This raises doubts as to its accuracy. Using interval estimation to conduct a sensitivity analysis will improve reliability.

For example, in figure 5, instead of estimating the outcome of “the EHR running well” as \$150,000 with a probability of 0.7, the organization can estimate that the outcome falls within a reasonable range from \$120,000 to \$180,000 and that the probability is between 0.55 and 0.85. The best outcome for branch (1) then becomes \$153,000; the worst becomes \$66,000. Such a sensitivity analysis can significantly improve the model’s reliability and accuracy. The examples in this article use point estimation for ease of illustration.
