📄️ Introduction
Introduction and Definition
📄️ Markov Chain (Definition and Properties)
Let’s understand stochastic processes with the help of some examples.
📄️ Stationary Markov Chains
Note that for the gambler’s ruin problem, $p_{i,j}^{n,n+1} = p_{i,j}$. This is an example of a case where the one-step transition function is the same for any value of $n$.
📄️ First-Step Analysis
Example
📄️ Generalising First-Step Analysis
In the previous week, we were able to answer specific strategy-related questions about a specific instance of the gambler’s ruin problem. But we don’t want to perform the analysis each time we’re given a different example (i.e., different parameters, or initial conditions): so, can we come up with a more general solution?
📄️ General Solution to Gambler’s Ruin
Until now, we had been using fixed values for the initial condition and stopping constraints. But what if we want to solve a similar problem with a different constraint? We would have to solve it all over again (set up the linear system, etc.).
📄️ Random Walk
On the previous page, we derived the results for the general case of the gambler’s ruin problem. But we can go even further.
📄️ Classification of States
Recall that in the beginning, we described 4 types of problems/questions that we’re interested in solving:
📄️ Long Run Performance
Introduction
📄️ Midterm Cheatsheet
Click here to view the pdf of the cheatsheet.
📄️ Generalizing Long Run Performance Beyond Finite MCs
Recall this from the previous page:
📄️ Branching Process
Now that we’ve learnt everything, it’s time to apply it to examples! We’ll be looking at 3 main examples:
📄️ PageRank Algorithm
Let’s continue using stochastic processes as a tool to model and solve real world problems.
📄️ Markov Chain Sampling
Motivation
📄️ Poisson Process
Until now, we have been considering only discrete-time Markov chains. Here, we relax that assumption and talk about continuous-time Markov chains.
📄️ Finals Cheatsheet
Click here to view the pdf of the cheatsheet.