The approach taken in Haskell's probability library makes it especially convenient for modeling Markov chains.
Suppose $(X_n)_{n \ge 0}$ is a Markov chain on a state space $S$. The distribution of the process is determined by the distribution of $X_0$ and the numbers $p_{xy} = P(X_{n+1} = y \mid X_n = x)$, $x, y \in S$, called transition probabilities. For simplicity let's consider only time-homogeneous Markov chains, where these numbers don't depend on $n$, and let's assume that the initial distribution is a point mass, i.e. $P(X_0 = v) = 1$ for some $v \in S$. Then we can define a function $t \colon S \to \mathcal{D}(S)$, where for each $x \in S$ the value $t(x)$ is a distribution on $S$ defined by $t(x)(y) = p_{xy}$. Here $\mathcal{D}(S)$ denotes the set of finitely supported nonnegative functions on $S$ that sum up to 1. Conceptually, $t(x)$ is the distribution of the next value of the chain given that the current value is $x$. This is exactly the kind of distribution-valued function that is the right-hand operand of the bind operation from the previous post, which corresponds to Haskell's
>>= operation in the probability monad.
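To make this concrete, here is a minimal sketch of such a finitely supported distribution monad. This is a toy reimplementation for illustration only, not the library's actual code; the names Dist, certainly, bind, and norm are made up for this sketch.

```haskell
import qualified Data.Map as Map

-- A toy finitely supported distribution: a list of outcome/probability pairs.
newtype Dist a = Dist { unDist :: [(a, Double)] } deriving Show

-- A point mass: all probability sits on a single value.
certainly :: a -> Dist a
certainly x = Dist [(x, 1)]

-- Monadic bind: feed each outcome into a distribution-valued function
-- and weight the results by the probability of reaching them.
bind :: Dist a -> (a -> Dist b) -> Dist b
bind (Dist xs) f = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- unDist (f x) ]

-- Merge duplicate outcomes by summing their probabilities.
norm :: Ord a => Dist a -> Dist a
norm (Dist xs) = Dist (Map.toList (Map.fromListWith (+) xs))
```

Binding a point mass at 0 twice against a step function that goes up or down with probability 0.5 already reproduces a two-step random walk.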
The probability library's Transition module contains a function called compose that allows us to obtain the final distribution of a Markov chain given a list of transition functions and an initial value. Let's consider a random walk as an example. Suppose we start from 0, and in each step we go up or down by 1 with equal probability. After importing Numeric.Probability.Distribution as D and Numeric.Probability.Transition as Tran, we define the transition probabilities
rwtran :: Tran.T Double Int
rwtran x = D.choose 0.5 (x+1) (x-1)
Tran.T is the type of transition functions, that is, functions that assign distributions on some type (Int in this case) to values of that type.
Now the final distribution of values after $n$ steps of this Markov chain, starting from the initial value $v$, can be calculated by
rw :: Int -> Int -> D.T Double Int
rw v n = (Tran.compose $ replicate n rwtran) v
replicate n rwtran creates a list of (identical) transition functions that is magically composed by
compose into one transition function that takes us all the way from the initial value to the final distribution. From GHCi, the distribution after 5 steps looks like this:
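The magic is just Kleisli composition of monadic functions. A generic sketch of the idea, using the standard >=> operator from Control.Monad with the list monad standing in for an equal-weight distribution (this is not the library's actual implementation, and composeAll is a made-up name):

```haskell
import Control.Monad ((>=>))

-- Compose a list of monadic transition functions into a single one.
-- (A sketch of the idea behind Tran.compose, valid for any monad.)
composeAll :: Monad m => [a -> m a] -> a -> m a
composeAll = foldr (>=>) return

-- With the list monad standing in for an (equal-weight) distribution,
-- one random-walk step branches into the two neighbouring values.
listStep :: Int -> [Int]
listStep x = [x + 1, x - 1]
```

Applying composeAll (replicate 2 listStep) to 0 enumerates the four equally likely two-step outcomes.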
*ProbLib> rw 0 5
fromFreqs [(-1,0.3125),(1,0.3125),(-3,0.15625),(3,0.15625),(-5,3.125e-2),(5,3.125e-2)]
Using the ?? operator from the Distribution module we can ask about the probability of some event, for example: what is the probability that after 10 steps the value is positive?
*ProbLib Numeric.Probability.Distribution> (>0) ?? (rw 0 10)
0.376953125
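The idea behind such a query is simple: the probability of an event is the total weight of the outcomes satisfying a predicate. A sketch over a bare list of value/probability pairs (probOf is a made-up name, not the library's ??):

```haskell
-- Probability of an event: total weight of outcomes satisfying a predicate.
-- Operates on a bare list of (value, probability) pairs.
probOf :: (a -> Bool) -> [(a, Double)] -> Double
probOf p d = sum [ q | (x, q) <- d, p x ]

-- The distribution of the walk after 5 steps, copied from the output above.
afterFive :: [(Int, Double)]
afterFive = [(-1,0.3125),(1,0.3125),(-3,0.15625),(3,0.15625),(-5,3.125e-2),(5,3.125e-2)]
```

On the 5-step distribution above, probOf (> 0) gives 0.5, as symmetry demands (after an odd number of steps the walk cannot sit at 0).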
Sometimes the questions we want to ask are not about the properties of the final random variable of a Markov chain, but about the whole path $X_0, X_1, \ldots, X_n$. For example, suppose we want to know the probability that the random walk is never (up to $n$ steps) negative. One way to do that is to consider another Markov chain whose state space is the set of finite sequences of elements of $S$.
The transition function takes a sequence of length $k$ and assigns a distribution on sequences of length $k+1$ in the obvious way: it appends each possible next value of the original chain. For our random walk example, in Haskell we could write the following transition function
rwptran :: Tran.T Double [Int]
rwptran x = D.choose 0.5 (x ++ [(last x) + 1]) (x ++ [(last x) - 1])
And the distribution on the path space after $n$ steps is calculated by
rwp :: Int -> Int -> D.T Double [Int]
rwp v n = (Tran.compose $ replicate n rwptran) [v]
For this particular process the distribution on the path space is not very interesting by itself, since every path of length $n$ has the same probability $2^{-n}$.
*ProbLib Numeric.Probability.Distribution> rwp 0 3
fromFreqs [([0,-1,-2,-3],0.125),([0,-1,-2,-1],0.125),([0,-1,0,-1],0.125),([0,-1,0,1],0.125),([0,1,0,-1],0.125),([0,1,0,1],0.125),([0,1,2,1],0.125),([0,1,2,3],0.125)]
However, now we can ask questions related to paths. For example, what is the probability that in 10 steps we never go negative?
*ProbLib Numeric.Probability.Distribution> (all (>= 0)) ?? (rwp 0 10)
0.24609375
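As a sanity check, the same number can be computed without the library by brute force: enumerate all 2^n equally likely step sequences and count those whose running sums never go negative (a standalone sketch; neverNegative is a made-up name):

```haskell
-- Brute-force check: enumerate all 2^n equally likely +-1 step sequences
-- and count those whose running sums (starting from 0) never go negative.
neverNegative :: Int -> Double
neverNegative n = fromIntegral (length (filter ok paths)) / 2 ^^ n
  where
    paths = sequence (replicate n [1, -1])     -- all step sequences
    ok steps = all (>= 0) (scanl (+) 0 steps)  -- the path stays nonnegative
```

For n = 10 this counts 252 of the 1024 paths, giving 0.24609375 and agreeing with the library's answer above.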