Markov Processes

1. Introduction

Before we give the definition of a Markov process, we will look at an example.

Example 1. Suppose that bus ridership in a city is studied. After examining several years of data, it was found that 30% of the people who regularly ride the bus in a given year do not regularly ride the bus the next year. It was also found that 20% of the people who do not regularly ride the bus in a given year begin to ride the bus regularly the next year. If 5000 people ride the bus and 10,000 do not ride the bus in a given year, what is the distribution of riders and non-riders in the next year?
The distribution after one year is b = M x, where

    M = ( 0.7  0.2
          0.3  0.8 )

is the matrix of transition percentages and x = (5000, 10000)^T is the initial distribution; this gives b = (5500, 9500)^T. To compute the result after 2 years, we just use the same matrix M, but we apply it to b in place of x. Thus the distribution after 2 years is M b = M^2 x. In fact, after n years, the distribution is given by M^n x.

The foregoing example is an example of a Markov process. Now for some formal definitions:

Definition 1. A stochastic process is a sequence of events in which the outcome at any stage depends on some probability.

Definition 2. A Markov process is a stochastic process with the following properties:

(a) The number of possible outcomes or states is finite.
(b) The outcome at any stage depends only on the outcome of the previous stage.
(c) The probabilities are constant over time.

If x0 is a vector which represents the initial state of a system, then there is a matrix M such that the state of the system after one iteration is given by the vector M x0. Thus we get a chain of state vectors: x0, M x0, M^2 x0, ..., where the state of the system after n iterations is given by M^n x0. Such a chain is called a Markov chain, and the matrix M is called a transition matrix. The state vectors can be of one of two types: an absolute vector or a probability vector.
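The iteration described above can be sketched in a few lines of pure Python. The transition matrix below is the one implied by the bus example (70% of riders keep riding, 20% of non-riders start riding); the helper name mat_vec is illustrative, not from the notes.

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Transition matrix implied by the example: column j gives the
# probabilities of moving from state j into each state.
M = [[0.7, 0.2],
     [0.3, 0.8]]
x = [5000, 10000]          # initial state: riders, non-riders

state = x
for year in range(1, 3):
    state = mat_vec(M, state)
    print(f"year {year}: riders = {state[0]:.0f}, non-riders = {state[1]:.0f}")
```

Running the loop reproduces the hand computation: after one year the distribution is (5500, 9500), and applying M again gives the two-year distribution M^2 x.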
An absolute vector is a vector whose entries give the actual number of objects in a given state, as in the first example. A probability vector is a vector whose entries give the percentage (or probability) of objects in a given state. We will take all of our state vectors to be probability vectors from now on. Note that the entries of a probability vector add up to 1.

Theorem 3. Let M be the transition matrix of a Markov process such that M^k has only positive entries for some k. Then there exists a unique probability vector xs such that M xs = xs. Moreover, M^n x0 approaches xs as n grows, for any initial state probability vector x0.
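Theorem 3 can be checked numerically for the bus example: iterating M on two different initial probability vectors drives both toward the same limit. This is a minimal sketch in pure Python; the helper name mat_vec is illustrative, not from the notes.

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

M = [[0.7, 0.2],
     [0.3, 0.8]]

# Two different initial probability vectors (entries sum to 1).
results = []
for v in ([1.0, 0.0], [1/3, 2/3]):
    for _ in range(100):            # iterate M^n v until numerically converged
        v = mat_vec(M, v)
    results.append([round(p, 6) for p in v])
    print(results[-1])              # both starting points converge to [0.4, 0.6]
```

Convergence here is fast because the second eigenvalue of this M is 0.5, so the distance to the steady-state vector roughly halves with each iteration.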
The vector xs is called the steady-state vector.

2. The Transition Matrix and its Steady-State Vector

The transition matrix of an n-state Markov process is an n x n matrix M where the (i, j) entry of M represents the probability that an object in state j transitions into state i. That is, if M = (m_ij) and the states are S1, ..., Sn, then m_ij is the probability that an object in state S_j transitions to state S_i. Note that this means each column of M sums to 1. What remains is to determine the steady-state vector. Notice that we have the chain of equivalences:

    M xs = xs  <=>  M xs - xs = 0  <=>  M xs - I xs = 0  <=>  (M - I) xs = 0  <=>  xs in N(M - I).

Thus xs is a vector in the nullspace of M - I.
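For a 2 x 2 transition matrix the nullspace computation can be done by hand, and the sketch below (the helper name steady_state_2x2 is an assumption, not from the notes) carries it out: for M = ((1-a, b), (a, 1-b)), the matrix M - I = ((-a, b), (a, -b)) has nullspace spanned by (b, a), which is then scaled so its entries sum to 1.

```python
def steady_state_2x2(M):
    """Steady-state probability vector of a 2x2 transition matrix,
    found by solving (M - I) x = 0 and normalizing.
    For M = [[1-a, b], [a, 1-b]], the nullspace of M - I is spanned by (b, a)."""
    a = M[1][0]               # probability of leaving state 1
    b = M[0][1]               # probability of leaving state 2
    return [b / (a + b), a / (a + b)]

M = [[0.7, 0.2],
     [0.3, 0.8]]
xs = steady_state_2x2(M)
print(xs)                     # the bus example gives [0.4, 0.6]

# Verify M xs = xs:
check = [sum(m * x for m, x in zip(row, xs)) for row in M]
print(check)
```

For the bus example this recovers the same steady-state vector that the power iteration approaches, which is the content of Theorem 3.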