Decision trees are used when a sequence of decisions must be made. The trees consist of branches that connect decision points, points representing chance, and final outcomes. The probabilities and the profits or costs are entered, and the program computes the decisions that should be made and the value of each node. Any decision table can be put in the form of a decision tree, but the converse is not true.
The decision tree model
The general framework for a decision tree is given by the number of branches and the number of nodes in the tree. The number of branches is always one less than the number of nodes, and every node except the starting node has exactly one branch going into it. The number of branches going out of a node can be 0, 1, 2, or more. The nodes are of three types: decision nodes, chance nodes, and final nodes. Typically, decision nodes are represented by rectangles and chance nodes by circles.
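As a rough sketch of this structure (plain Python written for this discussion, not the module's internal representation; the names Node and check_tree are hypothetical), each node can be stored with its type and the end nodes of its outgoing branches, and the counting rules above can then be checked directly:

    from dataclasses import dataclass, field
    from typing import Dict, List

    DECISION, CHANCE, FINAL = "decision", "chance", "final"

    @dataclass
    class Node:
        number: int
        kind: str                    # DECISION, CHANCE, or FINAL
        children: List[int] = field(default_factory=list)  # end nodes of outgoing branches

    def check_tree(nodes: Dict[int, Node]) -> None:
        """Check the structural rules: branches = nodes - 1, and no node
        has more than one branch going into it."""
        branch_count = sum(len(n.children) for n in nodes.values())
        assert branch_count == len(nodes) - 1
        incoming = [child for n in nodes.values() for child in n.children]
        assert len(incoming) == len(set(incoming))

    # The smallest interesting tree: decision node 1 leading to final nodes 2 and 3.
    check_tree({1: Node(1, DECISION, [2, 3]), 2: Node(2, FINAL), 3: Node(3, FINAL)})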
Example
Our first example is given by a typical decision tree diagram. The figure has 12 branches. Profits appear to the right of the terminal nodes; notice also that there is a $100 cost in the middle of the tree for selecting a certain (market research) branch.
In order to use the decision tree module two things must occur. First, nodes must be added to the right of the ending branches. (Technically, it is illegal to draw a tree that ends with branches rather than nodes.) Second, the nodes must be numbered. The figure that follows shows the added nodes and the numbers that have been given to all of the nodes. The most convenient way to number the nodes is from left to right and top to bottom.
The initial data screen is generated by answering that there are 10 branches and that we wish to maximize profits. The following screen contains both the data and the solution.
Start and end node. Branches are characterized by their start and end nodes. An extra branch named 'start' is added in order to represent the final outcome of the tree as a whole. The node values are shown in the far right column; in this example the value of the decision tree is $525.
Branching probabilities. These appear in column 4 and give the probability of going along the branch from its start node to its end node. The probabilities on the branches out of an individual chance node should sum to 1.
Profits or costs. A profit (or cost) is to be entered for each ending node that is terminal. In addition, it is possible to enter a profit or cost for any branch. For example, notice that in branch 10 (node 6 to node 11) we have entered a cost of $100 by placing -100 in that cell.
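As an illustration of this input layout (made-up numbers, not the data of the example figure; the Branch type and the rows below are hypothetical), each row of the table can be thought of as a start node, an end node, a branching probability that is left blank on decision branches, and any profit or cost carried by the branch. The last few lines check the rule from the previous paragraph that the probabilities out of each chance node sum to 1:

    from collections import defaultdict
    from typing import NamedTuple, Optional

    class Branch(NamedTuple):
        start: int
        end: int
        probability: Optional[float]   # None on branches leaving a decision node
        profit: float                  # negative entries are costs, e.g. -100

    # Hypothetical rows: node 1 is a decision node, nodes 2 and 3 are chance
    # nodes, and the payoffs of terminal nodes 4-7 sit on the branches reaching them.
    rows = [
        Branch(1, 2, None, 0),
        Branch(1, 3, None, -100),      # a $100 cost entered on a decision branch
        Branch(2, 4, 0.6, 200),
        Branch(2, 5, 0.4, 100),
        Branch(3, 6, 0.5, 550),
        Branch(3, 7, 0.5, 300),
    ]

    totals = defaultdict(float)
    for row in rows:
        if row.probability is not None:
            totals[row.start] += row.probability
    for node, total in totals.items():
        assert abs(total - 1.0) < 1e-9, f"probabilities out of node {node} sum to {total}"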
The solution data are:
Branch use. For those branches that are decision branches and should always be chosen, "Always" is displayed. In our example, we should choose (1-3) rather than (1-2). For those branches that we should choose if we get to them, "Possibly" is displayed. For example, if we get to node 6 we should select (6-9) rather than (6-8); however, there is no guarantee that we will get to node 6, due to the probabilistic nature of the decision tree. The last type of branch is one that we should select if we get there, but we should not get there; these are marked "Backwards". Look at branch 7 (node 4 to node 8). If we get to node 4 we should use this branch. However, since we will select (1-3) at the beginning, we should not end up at node 4.
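A sketch of how these three labels could be worked out once the best branch out of each decision node is known (an illustration under assumptions, not the program's actual procedure; parent, node_type, and chosen are hypothetical inputs): a chosen branch is "Always" if its start node is reached from the starting node only through chosen decision branches, "Possibly" if at least one chance node lies along the way, and "Backwards" if an earlier decision rules its start node out.

    def classify_decision_branches(parent, node_type, chosen):
        """parent[n]    -> node whose branch leads into n (None for the starting node)
           node_type[n] -> 'decision', 'chance' or 'final'
           chosen[n]    -> end node of the best branch out of decision node n"""

        def status(n):
            # 'certain'     : n is reached no matter what
            # 'possible'    : n may be reached, depending on chance
            # 'unreachable' : an earlier best decision avoids n
            p = parent[n]
            if p is None:
                return "certain"
            s = status(p)
            if s == "unreachable":
                return "unreachable"
            if node_type[p] == "decision":
                return s if chosen[p] == n else "unreachable"
            return "possible"            # arriving through a chance node is never certain

        label = {"certain": "Always", "possible": "Possibly", "unreachable": "Backwards"}
        return {(n, best): label[status(n)] for n, best in chosen.items()}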
Ending node. The ending node is repeated to make the output easier to read.
Ending node type. The program identifies each ending node as either a final node, a decision node, or a chance node.
Expected value. The expected value for each node is listed. For final nodes, the expected value is identical to the input. For chance nodes, the expected value is the probability-weighted combination of the values of the nodes that follow. For decision nodes, the expected value is the best value available from the branches leaving that node. For both chance nodes and decision nodes, any costs on the branches are subtracted from the node values. For example, the value of node 11 is $550; however, the value of node 6 is $450 due to the $100 cost of going from node 6 to node 11.
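The fold-back computation described here can be sketched as follows (assuming profits are being maximized, as in this example; node_value, out_branches, and payoff are hypothetical names, not the module's own code). A final node keeps its input value; a chance node takes the probability-weighted sum of its branch values; a decision node takes the best branch value; and any profit or cost on a branch is added to the value of the node it leads to before that branch is averaged or compared:

    def node_value(n, node_type, out_branches, payoff):
        """Expected value of node n when profits are maximized.
           node_type[n]    -> 'decision', 'chance' or 'final'
           out_branches[n] -> list of (end_node, probability, profit_or_cost) tuples
           payoff[n]       -> value entered for final node n"""
        if node_type[n] == "final":
            return payoff[n]                               # identical to the input
        branch_values = []
        for end, prob, profit in out_branches[n]:
            # a cost entered as a negative profit is subtracted here
            value = profit + node_value(end, node_type, out_branches, payoff)
            branch_values.append((prob, value))
        if node_type[n] == "chance":
            return sum(p * v for p, v in branch_values)    # weighted combination
        return max(v for _, v in branch_values)            # decision node: best branch

Under this rule, a branch that leads to a node worth $550 but carries a $100 cost contributes 550 - 100 = 450, which is how node 6 ends up at $450 even though node 11 is worth $550.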
A graph of the tree structure can be displayed by the program.