Markov Analysis

A Markov Chain is described by a transition matrix that gives the probability of going from state to state. For example, consider the following:
From/To    State 1   State 2   State 3
State 1      .7        .1        .2
State 2      .05       .85       .1
State 3      .05       .05       .9

If we are in state 1, there is a 70% chance that we will be in state 1 at the next stage, a 10% chance that we move to state 2, and a 20% chance that we move to state 3. There are essentially two types of questions to answer for a Markov Chain. One is: Where will we be after a small number of steps? The other is: Where will we be after a large number of steps? Often the answer depends on the state in which we start.
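The short-run question is answered by raising the transition matrix to a power. A minimal sketch with NumPy, using the matrix from the table above (the equal-chance initial column is described in the next paragraph):

```python
import numpy as np

# One-step transition matrix from the table above.
P = np.array([[0.70, 0.10, 0.20],
              [0.05, 0.85, 0.10],
              [0.05, 0.05, 0.90]])

# Three-step transition matrix: entry (i, j) is the probability of
# being in state j three transitions after starting in state i.
P3 = np.linalg.matrix_power(P, 3)

# Probabilities after three steps, given an equal chance of starting
# in any of the three states.
initial = np.array([1/3, 1/3, 1/3])
after_three = initial @ P3
```

Each row of `P3` still sums to 1, since it is itself a transition matrix.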

The data screen for this example is shown next. The first column ('initial') indicates that we have an equal chance of starting in any of the three states. This column does not have to contain probabilities, as we show in Example 2. The extra data box above the data (number of transitions) indicates that we want to look at the results after 3 transitions.

Results

The results screen contains three different types of answers. The top 3-by-3 table contains the three-step transition matrix (which is independent of the starting state). The next row gives the probability that we end in state 1, 2, or 3, which is a function of the initial state probabilities. The last row gives the long-run (steady-state) probabilities, that is, the percentage of time we spend in each state.
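The steady-state row can be reproduced by solving pi P = pi with the probabilities summing to 1. A sketch, assuming the same three-state matrix as above:

```python
import numpy as np

P = np.array([[0.70, 0.10, 0.20],
              [0.05, 0.85, 0.10],
              [0.05, 0.05, 0.90]])

n = P.shape[0]
# Solve pi @ P = pi subject to sum(pi) = 1. Rewrite the first
# condition as (P.T - I) @ pi = 0, then replace the last (redundant)
# equation with the normalization sum(pi) = 1.
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)   # long-run share of time in each state
```

For this matrix the solution works out to pi = (1/7, 2/7, 4/7), independent of the initial column.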

The following screen displays the multiplications through three transitions (as requested in the extra data box above the data).



Example 2 - A complete analysis

Consider the Markov Chain that is displayed next. The chain consists of three different types of states: state 1 is absorbing, states 3 and 4 together form a closed, recurrent class, and state 2 is transient. Furthermore, we indicate that at the beginning of this problem there are 60, 80, 100 and 60 (total = 300) items in states 1, 2, 3 and 4, respectively. As previously stated, the initial column does not have to contain probabilities.

The first output table describes, as before, long-run behavior. The top of the table contains the long-run probabilities. The ending-number row gives the expected number (in the statistical sense) of the original 300 items that will end up in each state. In this example, we know that the 60 items that start in state 1 will end in state 1, and the 160 that start in states 3 and 4 will end in those states (divided evenly). Of the 80 that start in state 2, 28.57% (22.857) will end up in state 1 while the others will be split evenly over states 3 and 4.
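The ending numbers can be sketched in NumPy. The actual matrix appears only in the program's screen, so the one below is a hypothetical stand-in chosen to match the behavior described: state 1 is absorbing, states 3 and 4 form a closed class that splits evenly in the long run, and 2/7 (28.57%) of the items starting in state 2 are eventually absorbed into state 1.

```python
import numpy as np

# Hypothetical four-state matrix consistent with the description
# (the actual matrix is only shown in the program's data screen).
P = np.array([[1.00, 0.00, 0.00, 0.00],   # state 1: absorbing
              [0.20, 0.30, 0.25, 0.25],   # state 2: transient
              [0.00, 0.00, 0.60, 0.40],   # states 3 and 4: closed,
              [0.00, 0.00, 0.40, 0.60]])  # recurrent class

counts = np.array([60.0, 80.0, 100.0, 60.0])   # items starting in each state

# A high power of P approximates the limiting matrix, so this gives
# the expected number of items ending in each state.
ending = counts @ np.linalg.matrix_power(P, 1000)
```

With these (assumed) values, roughly 82.857 items end in state 1, none remain in the transient state 2, and the rest split evenly between states 3 and 4; the total stays at 300.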

The steady-state probabilities in the bottom row must all be interpreted as conditional on the closed recurrent class in which each state lies. For example, first note that they do not sum to 1. These classes are identified in a second output screen as shown below.

Finally, there is one more output screen. This screen contains the usual Markov matrices that are generated when performing a Markov Chain analysis. The top matrix is a sorted version of the original Markov Chain. It is sorted so that all states in the same recurrent class are adjacent (see states 3 and 4) and so that the transient states are last (state 2).

The B matrix is the subset of the original matrix consisting of only the transient states.

The F matrix is given by the equation

F = (I - B)^-1

where I is the identity matrix.

Finally, the FA matrix is the product of the F matrix and the A matrix, the submatrix formed by the cells that represent going from a transient state to any nontransient state.
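These matrices can be sketched for the same hypothetical four-state chain assumed earlier, in which state 2 is the only transient state, P(2 to 2) = 0.3, and state 2 leads to states 1, 3 and 4 with probabilities 0.2, 0.25 and 0.25:

```python
import numpy as np

# Hypothetical values for Example 2 (the actual matrix is only shown
# in the program's screen). B: transient-to-transient part of the
# sorted matrix; A: transient rows to nontransient states 1, 3, 4.
B = np.array([[0.30]])
A = np.array([[0.20, 0.25, 0.25]])

F = np.linalg.inv(np.eye(1) - B)   # F = (I - B)^-1
FA = F @ A                         # long-run absorption probabilities
```

Under these assumptions FA works out to (2/7, 5/14, 5/14), i.e., 28.57% of the items starting in state 2 are absorbed into state 1 and the rest split evenly over states 3 and 4, matching the ending-number row discussed earlier.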