36-467/667
29 October 2020 (Lecture 18)
\[ \newcommand{\Expect}[1]{\mathbb{E}\left[ #1 \right]} \newcommand{\Prob}[1]{\mathbb{P}\left[ #1 \right]} \newcommand{\Probwrt}[2]{\mathbb{P}_{#1}\left( #2 \right)} \newcommand{\Var}[1]{\mathrm{Var}\left[ #1 \right]} \newcommand{\Cov}[1]{\mathrm{Cov}\left[ #1 \right]} \newcommand{\Expectwrt}[2]{\mathbb{E}_{#1}\left[ #2 \right]} \newcommand{\InitDist}{\vec{p}_{\mathrm{init}}} \newcommand{\InvDist}{\vec{p}^{*}} \]
Equivalently, as a transition matrix: \(\mathbf{q}=\left[\begin{array}{cc} 0.5 & 0.5 \\ 0.75 & 0.25 \end{array} \right]\)
rmarkovchain <- function(n, p0, q) {
k <- length(p0)
stopifnot(k == nrow(q), k == ncol(q), isTRUE(all.equal(rowSums(q), rep(1, times = k))))
rinitial <- function() { sample(1:k,size=1,prob=p0) }
rtransition <- function(x) { sample(1:k,size=1,prob=q[x,]) }
return(rmarkov(n,rinitial,rtransition))
}
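The helper `rmarkov()` used above comes from an earlier slide and isn't shown here; a minimal version consistent with how it is called would look like this (a sketch, not necessarily the original definition):

```r
# Hypothetical reconstruction of rmarkov(): draw the initial state, then
# repeatedly apply the one-step transition sampler.
rmarkov <- function(n, rinitial, rtransition) {
  x <- vector(length = n)
  x[1] <- rinitial()                 # draw X(1) from the initial distribution
  for (t in 2:n) {
    x[t] <- rtransition(x[t - 1])    # each step depends only on the previous state
  }
  return(x)
}
```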
q <- matrix(c(0.5, 0.5, 0.75, 0.25), byrow=TRUE, nrow=2)
x <- rmarkovchain(1e4,c(0.5,0.5),q)
head(x)
## [1] 1 1 2 2 1 2
ones <- which(x[-1e4]==1) # Why omit the last step?
twos <- which(x[-1e4]==2)
signif(table(x[ones+1])/length(ones),3)
##
## 1 2
## 0.504 0.496
signif(table(x[twos+1])/length(twos),3)
##
## 1 2
## 0.758 0.242
versus the exact conditional distributions \((0.5,0.5)\) and \((0.75, 0.25)\)
From the Markov property: \[\begin{eqnarray} \Prob{X(1)} & = & \InitDist \mathbf{q}\\ \Prob{X(2)} & = & \Prob{X(1)} \mathbf{q} = \InitDist \mathbf{q}^2\\ \Prob{X(t)} & = & \Prob{X(t-1)} \mathbf{q} = \InitDist \mathbf{q}^{t} \end{eqnarray}\]
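The recursion above can be checked numerically by repeatedly multiplying the distribution by \(\mathbf{q}\); a quick sketch:

```r
# Evolve the distribution by the recursion p(t) = p(t-1) %*% q
q <- matrix(c(0.5, 0.5, 0.75, 0.25), byrow = TRUE, nrow = 2)
p <- c(0.5, 0.5)        # initial distribution
for (t in 1:10) {
  p <- p %*% q          # one step of the recursion
}
p                       # already very close to (0.6, 0.4)
```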
The distribution of \(X(t)\) always evolves linearly: each step is multiplication by the fixed matrix \(\mathbf{q}\)
The leading eigenvalue of \(\mathbf{q}\) is \(\lambda_1 = 1\), with left eigenvector \(\vec{v}_1\); all other \(|\lambda_j| < 1\)
Exercise: Show that, as \(t\rightarrow\infty\), \[ \InitDist \mathbf{q}^t \rightarrow \vec{v}_1 \] for all \(\InitDist\) (which are non-negative and sum to 1)
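One way to see this (a sketch, assuming \(\mathbf{q}\) is diagonalizable): expand \(\InitDist\) in the basis of left eigenvectors \(\vec{v}_j\) of \(\mathbf{q}\), so \(\InitDist = \sum_j c_j \vec{v}_j\). Then \[ \InitDist \mathbf{q}^t = \sum_j c_j \lambda_j^t \vec{v}_j \rightarrow c_1 \vec{v}_1 \] since \(\lambda_1 = 1\) while all other \(|\lambda_j| < 1\). Summing the coordinates of \(\vec{v}_j \mathbf{q} = \lambda_j \vec{v}_j\) shows that the entries of \(\vec{v}_j\) sum to 0 whenever \(\lambda_j \neq 1\), so the fact that \(\InitDist\) sums to 1 forces \(c_1 = 1\) when \(\vec{v}_1\) is scaled to sum to 1.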
eigen(t(q))
## eigen() decomposition
## $values
## [1] 1.00 -0.25
##
## $vectors
## [,1] [,2]
## [1,] 0.8320503 -0.7071068
## [2,] 0.5547002 0.7071068
v1 <- eigen(t(q))$vectors[, 1]
v1 / sum(v1)   # rescale the leading left eigenvector to sum to 1
## [1] 0.6 0.4
State counts from four independent runs of the chain, each of length \(10^4\); compare to \(10^4 \times (0.6, 0.4)\):
##
## 1 2
## 5983 4017
##
## 1 2
## 6038 3962
##
## 1 2
## 5956 4044
##
## 1 2
## 5987 4013