Markov chains

The approach taken in Haskell's probability library makes it especially convenient for modeling Markov chains.
Suppose \{X_n\}_{n \in \mathbb{N}} is a Markov chain on a state space S. The distribution of the process \{X_n\}_{n \in \mathbb{N}} is determined by the distribution of X_0 together with the numbers Pr(X_{n+1}=y | X_n = x), x,y \in S, n \in \mathbb{N}, called transition probabilities. For simplicity let's consider only time-homogeneous Markov chains, where these numbers don't depend on n, and let's assume that the initial distribution is a point mass, i.e. X_0 = x_0 for some x_0 \in S.

Then we can define a function f: S \rightarrow Dist(S), where for each x \in S the value f(x) is the distribution on S given by (f(x))(y) = Pr(X_{n+1}=y | X_n = x), y \in S. Here Dist(S) denotes the set of finitely supported nonnegative functions on S that sum up to 1. Conceptually, f(x) is the distribution of the next value of the chain given that the current value is x. This is exactly the kind of distribution-valued function that appears as the right-hand operand of the \leadsto operation from the previous post, which corresponds to Haskell's >>= operation in the probability monad.
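To make the connection with the monad concrete, here is a minimal sketch (the names Trans and oneStep below are mine, not the library's): a transition function is a Kleisli arrow for the distribution monad, and one step of the chain is a single bind.


import qualified Numeric.Probability.Distribution as D

-- A transition function in the sense above: it assigns to each state
-- the distribution of the next state.
type Trans s = s -> D.T Double s

-- One step of the chain: if the current state has distribution d, then
-- after one transition the state has distribution d >>= f.
oneStep :: D.T Double s -> Trans s -> D.T Double s
oneStep d f = d >>= f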

The probability library's Transition module contains a function called compose that lets us obtain the final distribution of a Markov chain from a list of transition functions and an initial value. Let's consider a random walk as an example. Suppose we start from 0 and in each step we go up or down by 1 with equal probability. After importing Numeric.Probability.Distribution as D and Numeric.Probability.Transition as Tran we define the transition probabilities


rwtran :: Tran.T Double Int
rwtran x = D.choose 0.5 (x+1) (x-1)

Tran.T Double Int is the type of transition functions, that is, functions that assign to each value of some type (Int in this case) a distribution over that type, with probabilities of type Double.
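If I am reading the library correctly, Tran.T prob a is just a type synonym for a -> D.T prob a, so the same definition can be written with the expanded type (rwtran' is my name for this variant):


-- rwtran with its type written out explicitly; this is exactly the
-- distribution-valued function f from the discussion above.
rwtran' :: Int -> D.T Double Int
rwtran' x = D.choose 0.5 (x+1) (x-1)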
Now the final distribution of values after n steps of this Markov chain starting from initial value v can be calculated by


rw :: Int -> Int -> D.T Double Int
rw v n = (Tran.compose $ replicate n rwtran) v

Here replicate n rwtran creates a list of n (identical) transition functions that is magically composed by compose into one transition function taking us all the way from the initial value to the final distribution. From GHCi the distribution after 5 steps looks like this:

*ProbLib> rw 0 5
fromFreqs [(-1,0.3125),(1,0.3125),(-3,0.15625),(3,0.15625),(-5,3.125e-2),(5,3.125e-2)]
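The magic is just monadic composition. A hand-rolled equivalent could look like the sketch below (this is the idea, not necessarily the library's actual implementation; composeSteps is my name):


import Control.Monad ((>=>))

-- Chain a list of transition functions with Kleisli composition; feeding an
-- initial value into the result gives the distribution after all the steps.
composeSteps :: [a -> D.T Double a] -> a -> D.T Double a
composeSteps = foldr (>=>) return

For a list of identical transition functions such as replicate n rwtran the order of composition doesn't matter, so composeSteps (replicate n rwtran) 0 should yield the same distribution as rw 0 n.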

Using the ?? operator from the Distribution module we can ask for the probability of an event, for example the probability that after 10 steps the value is positive.

*ProbLib Numeric.Probability.Distribution> (>0) ?? (rw 0 10)
0.376953125
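As far as I can tell, ?? just sums the probabilities of the outcomes satisfying the predicate. Assuming the decons accessor that exposes the underlying list of (value, probability) pairs, a hand-rolled equivalent might be (probOf is my name):


-- Sum the probabilities of all outcomes satisfying the predicate;
-- this is what a query with ?? computes.
probOf :: Num prob => (a -> Bool) -> D.T prob a -> prob
probOf p = sum . map snd . filter (p . fst) . D.decons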

Sometimes the questions we want to ask are not about the properties of the final random variable X_n of a Markov chain, but about the whole path X_0, X_1, ..., X_n. For example, suppose we want to know the probability that the random walk is never (up to n steps) negative. One way to do that is to consider another Markov chain \{X^*_n\} whose state space is the set of finite sequences S^* = \bigcup_{n > 0} (n \rightarrow S), where n \rightarrow S denotes the sequences of length n with values in S. The transition function takes a sequence (x_0, x_1, ..., x_{n-1}) of length n and assigns to it a distribution on sequences of length n+1: each possible next state x_n is appended to the sequence with probability (f(x_{n-1}))(x_n). For our random walk example in Haskell we could write the following transition function


rwptran :: Tran.T Double [Int]
rwptran x = D.choose 0.5 (x ++ [(last x) + 1]) (x ++ [(last x) - 1])

And the distribution on the path space after n steps is calculated by


rwp :: Int -> Int -> D.T Double [Int]
rwp v n = (Tran.compose $ replicate n rwptran) [v]

For this particular process the distribution on the path space is not very interesting by itself, as after n steps each of the 2^n possible paths has the same probability 2^{-n}.

*ProbLib Numeric.Probability.Distribution> rwp 0 3
fromFreqs [([0,-1,-2,-3],0.125),([0,-1,-2,-1],0.125),([0,-1,0,-1],0.125),
([0,-1,0,1],0.125),([0,1,0,-1],0.125),([0,1,0,1],0.125),([0,1,2,1],0.125),
([0,1,2,3],0.125)]

However, now we can ask questions related to paths. For example, what is the probability that in 10 steps we never go negative?

*ProbLib Numeric.Probability.Distribution> (all (>= 0)) ?? (rwp 0 10)
0.24609375
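The same pattern works for any other path property; we only need to change the predicate. As a small additional example (the name reachesThree is mine), here is the probability that the walk reaches level 3 at some point within the first n steps:


import Numeric.Probability.Distribution ((??))

-- Probability that the walk started at 0 reaches level 3 at some point
-- during the first n steps; only the path predicate changes.
reachesThree :: Int -> Double
reachesThree n = (any (>= 3)) ?? (rwp 0 n)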