Markov chains – simulation

I have to say it took me considerable effort to figure out how to use the part of Haskell’s probability library that deals with randomness and simulation. The intuition developed by programming in imperative languages is especially harmful here.
The paper on which the library is based mentions a function “pick, which selects exactly one value from a distribution by randomly yielding one of the values according to the specified probabilities”. Great, let’s try that out and toss a coin with it.

> :m +Numeric.Probability.Distribution
> :m +Numeric.Probability.Random
> pick $ choose 0.5 0 1

:1:0:
    No instance for (Show (Numeric.Probability.Random.T a))
      arising from a use of `Prelude.print' at :1:0-20
    Possible fix:
      add an instance declaration for
      (Show (Numeric.Probability.Random.T a))
    In the expression: Prelude.print it
    In a 'do' expression: Prelude.print it

A closer look at the type signatures reveals that the library function pick does not have type IO a as in the paper; to get an IO action out of it we need to compose it with run.

> run $ pick $ choose 0.5 0 1
Loading package array-0.1.0.0 ... linking ... done.
Loading package containers-0.1.0.1 ... linking ... done.
Loading package old-locale-1.0.0.0 ... linking ... done.
Loading package old-time-1.0.0.0 ... linking ... done.
Loading package random-1.0.0.0 ... linking ... done.
Loading package mtl-1.1.0.0 ... linking ... done.
Loading package probability-0.2.1 ... linking ... done.
0

This is better; let's try it again.

> run $ pick $ choose 0.5 0 1
0

And again.

> run $ pick $ choose 0.5 0 1
0

What the hell? We keep tossing a coin and get the same result every time! That’s Haskell in its referentially transparent glory.
To get a sample of tossing several coins we need to pick from their joint distribution.

> run $ pick $ replicateM 10 $ choose 0.5 0 1
[0,0,1,1,0,1,0,1,1,0]
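The joint-distribution trick works because distributions form a monad. Here is a minimal self-contained sketch of the idea (my own toy version, not the library's actual definitions): a distribution is a list of (value, probability) pairs, the Monad instance multiplies probabilities along each combination of outcomes, and replicateM then builds exactly the kind of joint distribution we picked from above.

```haskell
import Control.Monad (replicateM)

-- A toy distribution type: a list of (value, probability) pairs.
newtype Dist a = Dist { decons :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  -- bind multiplies the probabilities along each branch
  Dist xs >>= f = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- decons (f x) ]

-- choose p a b: a with probability p, b with probability 1 - p
choose :: Double -> a -> a -> Dist a
choose p a b = Dist [(a, p), (b, 1 - p)]

-- the joint distribution of three fair coin tosses: 8 outcomes, 1/8 each
threeTosses :: Dist [Int]
threeTosses = replicateM 3 (choose 0.5 0 1)
```

This also makes visible why the joint distribution gets big so quickly: n tosses produce a list of 2^n entries.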

Simulation

In my post on Markov chains I defined a transition function on paths for a simple random walk that goes up or down by one with the same probability.

rwstran :: Tran.T Double [Int]
rwstran x = D.choose 0.5 (x ++ [(last x) + 1]) (x ++ [(last x) - 1])

The probability distribution on paths of length n starting from state v can be calculated as

rws :: Int -> Int -> D.T Double [Int]
rws v n = (Tran.compose $ replicate n rwstran) [v]

This allows for exact calculation of the probabilities of certain events. The basic problem with modeling Markov chains this way is that even for short chains we need lots of memory to represent the whole sample space.
For the above random walk we can calculate the probability that, say, in the first 10 steps we will not go negative, but trying to do the same for 20 steps took longer than I had patience to wait. Haskell's probability library offers a solution to that: approximating the real distribution with an empirical one obtained from simulation.
I was a bit confused when I first tried to understand this. To simulate a sequence of i.i.d. random variables we need to provide a distribution to sample from. But we can't just repeatedly sample from the same distribution, as we would get the same number every time. We would have to first create the joint distribution with replicateM, but that defeats the purpose, as such a distribution would be too large. The answer the probability library offers is that the simplest objects it models are not sequences of i.i.d. random variables but homogeneous Markov chains. Such a Markov chain is determined by an initial value and a function that assigns a probability distribution to each element of the state space (let's call it a transition function). Sequences of i.i.d. random variables can be treated as a special case, but the basic setup is tailored to modeling Markov chains.
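The transition-function picture above can be sketched without the library at all (the names Probs, stepWalk, apply and walkDist below are mine, not the library's): a transition maps a path to a distribution over extended paths, and applying it n times yields the exact distribution on paths of length n, just like Tran.compose does.

```haskell
-- A distribution is a list of (value, probability) pairs,
-- and a transition assigns a distribution to each state.
type Probs a = [(a, Double)]
type Trans a = a -> Probs a

-- One step of the simple random walk: up or down by one, each with prob 1/2.
stepWalk :: Trans [Int]
stepWalk x = [ (x ++ [last x + 1], 0.5), (x ++ [last x - 1], 0.5) ]

-- Push a distribution through a transition (the monadic bind, spelled out).
apply :: Trans a -> Probs a -> Probs a
apply t d = [ (y, p * q) | (x, p) <- d, (y, q) <- t x ]

-- Exact distribution on n-step paths starting from v.
-- Note the blow-up: the result has 2^n entries.
walkDist :: Int -> Int -> Probs [Int]
walkDist v n = foldr apply [([v], 1)] (replicate n stepWalk)
```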

The most useful tool for simulation is the ~*. operator provided by the Simulation module. It takes a pair of integers as its left-hand argument. The first Int is the number of sample paths we want to draw, i.e. the size of the empirical sample. The second one defines the length of the Markov chain we want to simulate. On the right-hand side of the operator we put the transition function. So, if we have defined

emprws :: Int -> Int -> IO (T Double [Int])
emprws n k = Rnd.run $ (n,k) ~*. rwstran $ [0]

Then we can get the empirical distribution on paths of the Markov chain of length 20 simulated 10 times as follows:

> emprws 10 20
fromFreqs [([0,-1,-2,-1,-2,-1,-2,-1,0,-1,0,-1,0,-1,-2,-1,-2,-1,-2,-3,-2],0.1),
([0,-1,0,-1,0,-1,-2,-3,-2,-1,0,-1,-2,-3,-2,-1,-2,-1,0,-1,0],0.1),
([0,-1,0,1,0,-1,-2,-1,0,1,2,1,2,3,4,3,2,3,4,5,4],0.1),
([0,1,0,-1,-2,-3,-4,-5,-6,-7,-6,-7,-8,-9,-10,-9,-10,-11,-10,-11,-12],0.1),
([0,1,0,1,0,1,0,-1,-2,-3,-2,-1,0,1,2,1,2,1,2,1,0],0.1),
([0,1,2,1,0,-1,-2,-1,-2,-1,0,-1,-2,-1,-2,-3,-4,-5,-6,-5,-4],0.1),
([0,1,2,1,2,1,0,1,0,-1,0,-1,0,-1,-2,-3,-4,-5,-6,-5,-6],0.1),
([0,1,2,3,2,1,0,-1,-2,-1,-2,-3,-2,-3,-2,-3,-4,-3,-4,-5,-4],0.1),
([0,1,2,3,2,1,2,3,2,1,0,-1,-2,-3,-2,-3,-2,-3,-4,-5,-4],0.1),
([0,1,2,3,2,3,4,3,2,3,4,5,4,5,6,5,6,5,6,5,6],0.1)]

Note that since we use a random number generator, the distribution in the result type of the simulation is wrapped in the IO monad. Because of that we need to apply fmap before using the event probability operator ?? if we want to ask for the probability of some event. For example, to find an approximate probability that a random walk of length 20 starting from 0 never goes negative, based on simulating the random walk 2000 times, we can do

> fmap ((all (>= 0)) ?? ) $ emprws 2000 20
0.17200000000000013
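The ?? operator itself is easy to picture: over a list of (value, probability) pairs it just sums the probabilities of the outcomes satisfying the predicate. A rough sketch of the idea (my own simplified version, not the library's definition), applied to the hand-written exact distribution of two-step walks:

```haskell
-- (??) sums the probabilities of the outcomes where the event holds.
(??) :: (a -> Bool) -> [(a, Double)] -> Double
event ?? dist = sum [ p | (x, p) <- dist, event x ]

-- the exact distribution of two-step walks starting from 0
twoSteps :: [([Int], Double)]
twoSteps = [ ([0,1,2], 0.25), ([0,1,0], 0.25), ([0,-1,0], 0.25), ([0,-1,-2], 0.25) ]

-- probability that the walk never goes negative in two steps
nonNegProb :: Double
nonNegProb = all (>= 0) ?? twoSteps   -- 0.5: two of the four paths qualify
```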

Even more random simulation

The Simulation module defines two instances of the class that the ~*. operator works for. One instance is for distributions as defined in the Distribution module. Such distributions are essentially lists of (value, probability) pairs. We describe the Markov chain we want to simulate by defining a function that assigns a distribution of that type to each value of the state space. This works well for Markov chains in which each state has only a small number of states the chain can go to next. However, consider a Markov chain where the next value is determined by a continuous distribution, for example when X_{n+1} = X_n + E_n, where {E_n} is a sequence of i.i.d. random variables distributed uniformly on the interval (-1,1). To simulate such Markov chains we can use the second instance of the class defined in the Simulation module. In this second instance we use numbers obtained from a random number generator instead of an explicit distribution.

Let’s look at an example. First we define a transition function that describes our Markov chain. We want to model paths of the Markov chain, so the function takes a path (a list of floats) and assigns to it a (random) path with one more element, obtained from the random number generator.

rwrnd :: Rnd.Change [Float]
rwrnd x = fmap (\delta -> x ++ [(last x) + delta]) $ Rnd.randomR (-1,1)

Such a transition function can be used with the ~*. operator in the same way as the rwstran transition function above.

emprwrnd :: Int -> Int -> IO (T Double [Float])
emprwrnd n k = Rnd.run $ (n,k) ~*. rwrnd $ [0]

Now let’s simulate this Markov chain, getting 5 paths of length 10 each:

> emprwrnd 5 10
fromFreqs
[([0.0,-0.87644845,-1.8566498,-1.7820039,-1.9403718,-0.9496375,-1.4706907,
-0.8597232,-1.6741004,-2.1475728,-2.224792],0.2),
([0.0,-0.48623613,-2.408719e-,0.60908824,0.9019042,1.3207725,1.9422557,
1.1668808,0.5517659,1.1509926,1.1245335],0.2),
([0.0,-0.18386137,-1.0592186,-0.95641375,-0.9876149,-1.7923257,-1.4315606,
-1.8640705,-2.6031392,-1.7015955,-1.166816],0.2),
([0.0,2.0809833e-,-0.6174592,-0.41014677,0.42637175,0.4807669,1.0540222,
8.9894295e-2,-0.18835253,0.6476095,0.9155247],0.2),
([0.0,0.97053343,6.692678e-,0.55357254,0.7402272,0.40708384,0.19999743,
-0.37751698,0.5084936,0.303145,0.9905505],0.2)]

We can also ask the same type of questions as we did with the discrete random walk, in a similar way. So, what is the probability that in 20 steps we never go negative?

> fmap ((all (>= 0)) ?? ) $ emprwrnd 2000 20
0.12650000000000008
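For completeness, this kind of simulation can also be sketched without the library, by threading a pseudo-random generator through the steps by hand; roughly speaking, this explicit threading is what Rnd.run hides. The sketch below is my own (the names nextSeed, uniform11, step and walk are not from the library), using a tiny linear congruential generator with the classic Numerical Recipes constants:

```haskell
import Data.Word (Word32)

-- A tiny linear congruential PRNG (Numerical Recipes constants);
-- for illustration only, not for serious use.
nextSeed :: Word32 -> Word32
nextSeed s = 1664525 * s + 1013904223

-- A uniform draw from (-1, 1), returning the value and the new seed.
uniform11 :: Word32 -> (Float, Word32)
uniform11 s = (2 * fromIntegral s' / 4294967296 - 1, s')
  where s' = nextSeed s

-- Extend a path by one step, threading the seed explicitly.
step :: ([Float], Word32) -> ([Float], Word32)
step (xs, s) = (xs ++ [last xs + delta], s')
  where (delta, s') = uniform11 s

-- A k-step walk starting from 0, determined by the initial seed.
walk :: Int -> Word32 -> [Float]
walk k seed = fst (iterate step ([0], seed) !! k)
```

The explicit seed makes the earlier point concrete: calling walk twice with the same seed gives the same path, and varying the seed (or letting IO supply it) is what makes repeated runs differ.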