
Random Walk

On the previous page, we derived the results for the general case of the gambler’s ruin problem. But we can go even further.

Mathematically, all that matters are the transition probabilities, the states, and the stopping conditions. The interpretation of each of these depends on the context. So, we can abstract further while keeping the underlying mathematics intact. That is, instead of thinking of $X_n$ as the amount of money that a gambler has at time $t=n$, we can think of $X_n$ as a location. Then, $p$ can be the probability of going left/right (for the simple 1-dimensional case), and $N$ can be the destination that the person wishes to reach. We can treat the location $0$ as the “end” (maybe a cliff?) → some kind of absorbing state.

Obviously, we haven’t changed the probabilistic structure or “randomness” of the process in any way → the two “processes” (general gambler’s ruin and random walk) are mathematically equivalent.

More formally, consider a stochastic process describing a drunk man who takes a random step right with probability $p$ and left with probability $q = 1 - p$. Note that we don’t have any stopping rule yet. Then, we can write a random walk $\{X_n\}_{n=0}^\infty$ as a summation of multiple steps.

In particular, $p_{n,n+1} = p$ (the probability of taking a step forward, going from state $n$ to $n+1$) and $p_{n,n-1} = q$ (the probability of taking a step backward, from state $n$ to $n-1$).

Each step $\xi_i$ can be modelled as a Bernoulli random variable (but in this case, we “code” failure as $-1$ instead of $0$, so we need to change it slightly). With some cleverness, we can write it as:

$$\frac{\xi_i+1}{2} \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(p)$$

Interpreting the above equation: with probability $p$, we have $\xi_i = 2\times 1 - 1 = 1$, and with probability $q = 1-p$, we have $\xi_i = 2\times 0 - 1 = -1$ (which is exactly what we want).

Then, we have:

$$X_n = X_0 + \sum_{i=1}^n \xi_i, \quad n = 1, 2, \dots$$

Here, $X_0$ is the starting location.
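To make this concrete, here is a minimal simulation sketch (Python with NumPy is assumed; the function name `simulate_walk` and the parameter choices are just illustrative, not from the text) of the step coding $\xi_i = 2 \cdot \text{Bernoulli}(p) - 1$ and the sum $X_n = X_0 + \sum_{i=1}^n \xi_i$:

```python
import numpy as np

def simulate_walk(n_steps, p=0.5, x0=0, rng=None):
    """Simulate a 1D random walk: X_n = X_0 + a sum of +/-1 steps."""
    rng = np.random.default_rng() if rng is None else rng
    # Each step is a Bernoulli(p) "success" coded as +1 and "failure" as -1, i.e. xi = 2*B - 1.
    xi = 2 * rng.binomial(1, p, size=n_steps) - 1
    return x0 + np.concatenate(([0], np.cumsum(xi)))  # [X_0, X_1, ..., X_n]

path = simulate_walk(n_steps=10, p=0.6)
print(path)  # 11 values, each consecutive pair differing by exactly 1
```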

Observe that the number of steps we take can itself be random, i.e., we don’t know in advance when we will stop the random walk. So, the stopped walk can also be considered a “random sum” in this sense.

A special case is when $X_0 = 0$; then we have:

$$\frac{X_n+n}{2} \sim \text{Binomial}(n,p)$$

This follows from the fact that a sum of $n$ independent and identically distributed Bernoulli($p$) random variables is a Binomial($n,p$) random variable: when $X_0 = 0$, we have $\frac{X_n+n}{2} = \sum_{i=1}^n \frac{\xi_i+1}{2}$, which is exactly such a sum.
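As a quick sanity check of this claim (a hedged sketch with arbitrary parameter values and sample sizes), we can simulate many walks started at $X_0 = 0$ and compare the moments of $(X_n + n)/2$ against a direct Binomial$(n, p)$ sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 20, 0.3, 100_000

# X_n for many independent walks started at X_0 = 0.
steps = 2 * rng.binomial(1, p, size=(trials, n)) - 1   # each step is +1 or -1
X_n = steps.sum(axis=1)

# (X_n + n) / 2 should behave like a Binomial(n, p) sample.
transformed = (X_n + n) // 2
direct_binomial = rng.binomial(n, p, size=trials)

print("mean of (X_n + n)/2  :", transformed.mean())      # ~ n*p = 6
print("mean of Binomial(n,p):", direct_binomial.mean())  # ~ n*p = 6
print("var  of (X_n + n)/2  :", transformed.var())       # ~ n*p*(1-p) = 4.2
```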

This version of the random walk is NOT the same as gambler’s ruin because we don’t have any stopping rule yet. But we can set a stopping rule such that if $X_n$ arrives at $N$ or $0$, it stays at that state forever, and we recover the gambler’s ruin setup.

With first step analysis, we have the formulae (same as gambler’s ruin):

$$P(X_n = 0 \mid X_0 = k) = \begin{cases} 1 - \dfrac{k}{N}, & p = 1/2 \\[1.5ex] 1 - \dfrac{1-(q/p)^k}{1 - (q/p)^N}, & p \neq 1/2 \end{cases}$$

$$E[T \mid X_0 = k] = \begin{cases} k(N-k), & p = 1/2 \\[1.5ex] \dfrac{1}{p-q} \left( N\,\dfrac{1-(q/p)^k}{1 - (q/p)^N} - k \right), & p \neq 1/2 \end{cases}$$
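As a rough numerical check of these formulae (a sketch only; the helper `absorb` and the parameter values are hypothetical choices, not from the text), we can simulate the absorbed walk repeatedly and compare the empirical ruin probability and mean absorption time against the $p \neq 1/2$ closed forms:

```python
import numpy as np

def absorb(k, N, p, rng):
    """Run the walk from k until it hits 0 or N; return (ruined?, steps taken)."""
    x, t = k, 0
    while 0 < x < N:
        x += 1 if rng.random() < p else -1
        t += 1
    return x == 0, t

rng = np.random.default_rng(1)
k, N, p, q = 3, 10, 0.45, 0.55
runs = [absorb(k, N, p, rng) for _ in range(50_000)]
ruined = np.array([r for r, _ in runs])
times = np.array([t for _, t in runs])

r = (q / p) ** np.arange(N + 1)                  # r[j] = (q/p)^j
ruin_theory = 1 - (1 - r[k]) / (1 - r[N])
time_theory = (N * (1 - r[k]) / (1 - r[N]) - k) / (p - q)

print("P(ruin): sim", ruined.mean(), "theory", ruin_theory)
print("E[T]   : sim", times.mean(),  "theory", time_theory)
```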

When $p=1/2$, we call it a “symmetric” random walk.

The above definition is for a 1-dimensional random walk (the person can only go forward/backward, or left/right). Can we extend this to multiple dimensions?

info

Note that we’re still dealing with “discrete” space. It’s not continuous (i.e., you cannot move an arbitrary distance in any dimension; you must move an integer distance). This is called “discretizing” space.

Yes, and it’s not very different. There is nothing “mathematical” about the directions themselves. So, to extend it to $d$ dimensions, we can consider the nearest $2d$ neighbors (think: “top/bottom” along each dimension → it’s like making a binary decision for exactly one of the $d$ dimensions). Note that in this case, we’re only allowed to move along one dimension in any step. So, we can pick a dimension in $d$ possible ways and then pick $+$/$-$ along that dimension in $2$ ways → hence, there are $2d$ neighbours. It is NOT $2^d$ → that would be the case if we moved along one direction in every dimension in a single step.

In the case of $d=2$, the diagram below illustrates which steps are allowed by our definition of random walk:

(Figure: the four allowed steps from a lattice point in 2D — up, down, left, right.)

So, for the 2D random walk, any step $\xi_i \in \{(0, 1), (1, 0), (0, -1), (-1, 0)\}$.

For a $d$-dimensional random walk, consider $\mathbb{Z}^d = \{(x_1,\dots, x_d): x_j \in \mathbb{Z}\}$. It forms a lattice. In the symmetric random walk case, each of the $2d$ nearest neighbors has probability $1/(2d)$. We denote each step as $\xi_i$. (You can think of each step as a “delta” tuple with $d-1$ zeroes and exactly $1$ non-zero entry, which can be $\pm 1$. Example: in 3 dimensions, $\xi_1 = (1, 0, 0)$ means that you move right along the $x$-axis, and $\xi_1 = (0, 0, 1)$ means you move up along the $z$-axis. Note that $(1, 0, 1)$ is NOT allowed in a single step → this explains why you have $2d$ instead of $2^d$ neighbours.)
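A small sketch of how such a step could be sampled (illustrative names, NumPy assumed): choose one of the $d$ dimensions uniformly, then choose $\pm 1$ along it, which gives each of the $2d$ neighbours probability $1/(2d)$:

```python
import numpy as np

def sample_step(d, rng):
    """One step of the symmetric d-dimensional walk: d-1 zeroes and exactly one +/-1."""
    step = np.zeros(d, dtype=int)
    axis = rng.integers(d)            # choose one of the d dimensions
    step[axis] = rng.choice([-1, 1])  # move +1 or -1 along that dimension
    return step

rng = np.random.default_rng(2)
d, n = 3, 5
X = np.zeros(d, dtype=int)            # X_0 at the origin
for _ in range(n):
    X += sample_step(d, rng)
print(X)  # exactly one coordinate changes per step
```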

Then we have (in the same way as for a single-dimension random walk):

$$X_n = X_0 + \sum_{i=1}^n \xi_i, \quad n = 1, 2, \dots$$

In fact, the probabilities need not be $1/(2d)$ → they can be anything, as long as they sum to $1$.

For such a random walk, we can also find:

  • Stopping time: when $X_n$ arrives at a point $S = (s_1, \dots, s_d)$, the walker will stop there.
  • the probability that $X_n$ can arrive at $S$

using first step analysis.

A famous result is:

A drunk man will find his home, but a drunk bird may get lost forever.

It comes from the result that

  • For a 2-dimensional symmetric random walk, the probability that the walker arrives at $S$ (maybe his home?) is $1$ (assuming an infinite 2D plane, the walker almost surely reaches his home → read this for a more mathematical definition of “almost surely” → it’s super interesting 🎊)
  • For any dimension $> 2$, the probability that the walker arrives at $S$ is $< 1$.

It’s a good exercise to try and prove the above result.
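We won’t prove it here, but a crude Monte Carlo illustration is possible (a sketch only: a finite horizon and finite sample can merely suggest the trend, not establish the limiting result, and it is phrased in terms of returning to the starting point, the usual “recurrence” formulation). The idea is to estimate the fraction of symmetric walks that revisit their starting point within a fixed number of steps, for $d = 2$ versus $d = 3$; the helper name `returned` is just an illustrative choice:

```python
import numpy as np

def returned(d, max_steps, rng):
    """Does a symmetric d-dim walk from the origin revisit the origin within max_steps?"""
    x = np.zeros(d, dtype=int)
    for _ in range(max_steps):
        axis = rng.integers(d)
        x[axis] += rng.choice([-1, 1])
        if not x.any():          # back at the origin
            return True
    return False

rng = np.random.default_rng(3)
for d in (2, 3):
    hits = sum(returned(d, max_steps=2_000, rng=rng) for _ in range(2_000))
    print(f"d={d}: fraction returned within 2000 steps ≈ {hits / 2_000:.2f}")
# As the horizon grows, the d=2 fraction keeps creeping towards 1,
# while the d=3 fraction stays bounded away from 1.
```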

We can generalize the random walk in another way, by relaxing the “discrete” constraint, to the continuous case (can be 1D, 2D, or $d$ dimensions too).

For each step, the walker moves $\triangle x$ in $\triangle t$ time. When $\triangle x \to 0$ and $\triangle t \to 0$, it can describe a continuous path (we’re considering smaller and smaller intervals of time, and we expect the particle to move a very small amount in that small interval of time).

Actually, when $\triangle t = c\,(\triangle x)^2$ (according to statistical mechanics), the resulting process is called Brownian motion. But this is outside the scope of the class.
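As a small sketch of that scaling (illustrative only; the constant is taken to be $c = 1$ and the sample sizes are arbitrary), we can set $\triangle x = \sqrt{\triangle t}$ and check that the variance of the rescaled walk at a fixed time $T$ stays the same no matter how finely we discretize, which is the hallmark of the diffusive (Brownian) limit:

```python
import numpy as np

rng = np.random.default_rng(4)
T, trials = 1.0, 50_000                    # fixed physical time horizon

for n_steps in (100, 1_000, 10_000):
    dt = T / n_steps                       # smaller and smaller time increments
    dx = np.sqrt(dt)                       # diffusive scaling: dt = dx^2 (with c = 1)
    ups = rng.binomial(n_steps, 0.5, size=trials)   # number of +dx steps out of n_steps
    X_T = dx * (2 * ups - n_steps)         # rescaled walk evaluated at time T
    print(f"n={n_steps:6d}: Var[X_T] ≈ {X_T.var():.3f}")   # stays ≈ T for every n
```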