- Introduction to Python
- Getting started with Python and the IPython notebook
- Functions are first class objects
- Data science is OSEMN
- Working with text
- Preprocessing text data
- Working with structured data
- Using SQLite3
- Using HDF5
- Using numpy
- Using Pandas
- Computational problems in statistics
- Computer numbers and mathematics
- Algorithmic complexity
- Linear Algebra and Linear Systems
- Linear Algebra and Matrix Decompositions
- Change of Basis
- Optimization and Non-linear Methods
- Practical Optimization Routines
- Finding roots
- Optimization Primer
- Using scipy.optimize
- Gradient descent
- Newton’s method and variants
- Constrained optimization
- Curve fitting
- Finding parameters for ODE models
- Optimization of graph node placement
- Optimization of standard statistical models
- Fitting ODEs with the Levenberg–Marquardt algorithm
- 1D example
- 2D example
- Algorithms for Optimization and Root Finding for Multivariate Problems
- Expectation Maximization (EM) Algorithm
- Monte Carlo Methods
- Resampling methods
- Resampling
- Simulations
- Setting the random seed
- Sampling with and without replacement
- Calculation of Cook’s distance
- Permutation resampling
- Design of simulation experiments
- Example: Simulations to estimate power
- Check with R
- Estimating the CDF
- Estimating the PDF
- Kernel density estimation
- Multivariate kernel density estimation
- Markov Chain Monte Carlo (MCMC)
- Using PyMC2
- Using PyMC3
- Using PyStan
- C Crash Course
- Code Optimization
- Using C code in Python
- Using functions from various compiled languages in Python
- Julia and Python
- Converting Python Code to C for speed
- Optimization bake-off
- Writing Parallel Code
- Massively parallel programming with GPUs
- Writing CUDA in C
- Distributed computing for Big Data
- Hadoop MapReduce on AWS EMR with mrjob
- Spark on a local machine using 4 nodes
- Modules and Packaging
- Tour of the Jupyter (IPython3) notebook
- Polyglot programming
- What you should know and learn more about
- Wrapping R libraries with Rpy
Metropolis-Hastings sampler
This lecture will only cover the basic ideas of MCMC and the 3 common variants - Metropolis-Hastings, Gibbs and slice sampling. All code will be built from the ground up to illustrate what is involved in fitting an MCMC model, but only toy examples will be shown since the goal is conceptual understanding. More realistic computational examples will be shown in the next lecture using the pymc and pystan packages.
In Bayesian statistics, we want to estimate the posterior distribution, but this is often intractable due to the high-dimensional integral in the denominator (marginal likelihood). A few other ideas we have encountered that are also relevant here are Monte Carlo integration with independent samples and the use of proposal distributions (e.g. rejection and importance sampling). As we have seen from the Monte Carlo integration lectures, we can approximate the posterior \(p(\theta | X)\) if we can somehow draw many samples that come from the posterior distribution. With vanilla Monte Carlo integration, we need the samples to be independent draws from the posterior distribution, which is a problem if we do not actually know what the posterior distribution is (because we cannot integrate the marginal likelihood).
With MCMC, we draw samples from a (simple) proposal distribution so that each draw depends only on the state of the previous draw (i.e. the samples form a Markov chain). Under certain conditions, the Markov chain will have a unique stationary distribution. In addition, not all samples are used - instead we set up acceptance criteria for each draw based on comparing successive states with respect to a target distribution that ensure that the stationary distribution is the posterior distribution of interest. The nice thing is that this target distribution only needs to be proportional to the posterior distribution, which means we don't need to evaluate the potentially intractable marginal likelihood, which is just a normalizing constant. We can find such a target distribution easily, since posterior \(\propto\) likelihood \(\times\) prior. For example, in the coin-toss model used below, a binomial likelihood times a beta prior gives an unnormalized posterior \(\theta^{h+a-1}(1-\theta)^{n-h+b-1}\), which is proportional to a \(\text{Beta}(h+a, n-h+b)\) density. After some time, the Markov chain of accepted draws will converge to the stationary distribution, and we can use those samples as (correlated) draws from the posterior distribution, and find functions of the posterior distribution in the same way as for vanilla Monte Carlo integration.
There are several flavors of MCMC, but the simplest to understand is the Metropolis-Hastings random walk algorithm, and we will start there.
To carry out the Metropolis-Hastings algorithm, we need to draw random samples from the following distributions:
- the standard uniform distribution
- a proposal distribution \(p(x)\) that we choose to be \(\mathcal{N}(0, \sigma)\)
- the target distribution \(g(x)\) which is proportional to the posterior probability
Given an initial guess for \(\theta\) with positive probability of being drawn, the Metropolis-Hastings algorithm proceeds as follows
- Choose a new proposed value (\(\theta_p\)) such that \(\theta_p = \theta + \Delta\theta\) where \(\Delta \theta \sim \mathcal{N}(0, \sigma)\)
- Calculate the ratio \[\rho = \frac{g(\theta_p \ | \ X)}{g(\theta \ | \ X)}\]
where \(g\) is the posterior probability.
- If the proposal distribution is not symmetrical, we need to weight the acceptance probability to maintain detailed balance (reversibility) of the stationary distribution, and instead calculate \[\rho = \frac{g(\theta_p \ | \ X) p(\theta \ | \ \theta_p)}{g(\theta \ | \ X) p(\theta_p \ | \ \theta)}\]
Since we are taking ratios, the denominator cancels and any distribution proportional to \(g\) will also work - so we can use
\[\rho = \frac{p(X | \theta_p ) p(\theta_p)}{p(X | \theta ) p(\theta)}\]
- If \(\rho \ge 1\), then set \(\theta = \theta_p\)
- If \(\rho < 1\), then set \(\theta = \theta_p\) with probability \(\rho\), otherwise leave \(\theta\) unchanged (this is where we use the standard uniform distribution)
- Repeat the earlier steps (a minimal sketch of a single update is given below)
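Putting these steps together, here is a minimal sketch of a single random walk Metropolis update, assuming a helper function target that is proportional to the posterior; the function names are illustrative, and the full worked example appears further below.

import numpy as np
import scipy.stats as st

def metropolis_step(theta, target, sigma):
    # Propose a new value by taking a normal step from the current value
    theta_p = theta + st.norm(0, sigma).rvs()
    # Acceptance probability: ratio of unnormalized posterior densities
    rho = min(1, target(theta_p)/target(theta))
    # Accept the proposal with probability rho (using a standard uniform draw)
    if np.random.uniform() < rho:
        theta = theta_p
    return theta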
After some number of iterations \(k\), the samples \(\theta_{k+1}, \theta_{k+2}, \dots\) will be samples from the posterior distribution. Here are some initial concepts to help your intuition about why this is so:
- We accept a proposed move to \(\theta_{k+1}\) whenever the density of the (unnormalized) target distribution at \(\theta_{k+1}\) is larger than the density at \(\theta_k\) - so \(\theta\) will more often be found in places where the target distribution is denser
- If these were the only moves we accepted, \(\theta\) would get stuck at a local mode of the target distribution, so we also accept occasional moves to lower density regions - it turns out that the correct probability of doing so is given by the ratio \(\rho\)
- The acceptance criterion only looks at ratios of the target distribution, so the denominator cancels out and does not matter - that is why we only need samples from a distribution proportional to the posterior distribution
- So, \(\theta\) will be expected to bounce around in such a way that it spends its time in places proportional to the density of the posterior distribution - that is, \(\theta\) is a draw from the posterior distribution.
Additional notes:
Different proposal distributions can be used for Metropolis-Hastings:
- The independence sampler uses a proposal distribution that is independent of the current value of \(\theta\). In this case the proposal distribution needs to be similar to the posterior distribution for efficiency, while ensuring that the acceptance ratio is bounded in the tail region of the posterior.
- The random walk sampler (used in this example) takes a random step centered at the current value of \(\theta\) - efficiency is a trade-off between a small step size with high probability of acceptance and large step sizes with low probability of acceptance. Note (picture will be sketched in class) that the random walk may take a long time to traverse narrow regions of the probability distribution. Changing the step size (e.g. scaling \(\Sigma\) for a multivariate normal proposal distribution) so that a target proportion of proposals are accepted is known as tuning (a crude tuning loop is sketched after this list).
- Much research is being conducted on different proposal distributions for efficient sampling of the posterior distribution.
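To make the idea of tuning concrete, here is a crude sketch that adjusts the proposal scale \(\sigma\) during burn-in until roughly a target fraction of proposals is accepted. The 30% goal and the 0.8/1.2 adjustment factors are arbitrary illustrative choices, not part of the example that follows, and it reuses the metropolis_step helper sketched above.

def tune_sigma(target, theta, sigma, steps=1000, goal=0.3):
    # Repeatedly run short pilot chains and nudge sigma toward the goal rate
    for _ in range(20):
        naccept = 0
        for _ in range(steps):
            theta_new = metropolis_step(theta, target, sigma)
            naccept += theta_new != theta   # rejected proposals leave theta unchanged
            theta = theta_new
        rate = naccept/steps
        if rate < goal:
            sigma *= 0.8   # too many rejections - take smaller steps
        else:
            sigma *= 1.2   # acceptance is high - larger steps explore faster
    return sigma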
We will first see a numerical example and then try to understand why it works.
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt

def target(lik, prior, n, h, theta):
    # Unnormalized posterior: likelihood x prior (zero outside [0, 1])
    if theta < 0 or theta > 1:
        return 0
    else:
        return lik(n, theta).pmf(h)*prior.pdf(theta)

n = 100
h = 61
a = 10
b = 10
lik = st.binom
prior = st.beta(a, b)
sigma = 0.3

naccept = 0
theta = 0.1
niters = 10000
samples = np.zeros(niters+1)
samples[0] = theta
for i in range(niters):
    theta_p = theta + st.norm(0, sigma).rvs()
    rho = min(1, target(lik, prior, n, h, theta_p)/target(lik, prior, n, h, theta))
    u = np.random.uniform()
    if u < rho:
        naccept += 1
        theta = theta_p
    samples[i+1] = theta
nmcmc = len(samples)//2
print("Efficiency =", naccept/niters)
Efficiency = 0.19
post = st.beta(h+a, n-h+b)
thetas = np.linspace(0, 1, 200)   # grid for plotting the true posterior (undefined in the original)

plt.figure(figsize=(12, 9))
plt.hist(samples[nmcmc:], 40, histtype='step', density=True, linewidth=1, label='Distribution of posterior samples');
plt.hist(prior.rvs(nmcmc), 40, histtype='step', density=True, linewidth=1, label='Distribution of prior samples');
plt.plot(thetas, post.pdf(thetas), c='red', linestyle='--', alpha=0.5, label='True posterior')
plt.xlim([0,1]);
plt.legend(loc='best');
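Since the MCMC draws are correlated, it can also be informative to plot the sample autocorrelation of the chain. Here is a minimal sketch using the samples generated above; the lag cutoff of 50 is an arbitrary choice.

def acf(x, maxlag=50):
    # Normalized sample autocorrelation for lags 0..maxlag-1
    x = x - x.mean()
    c = np.correlate(x, x, mode='full')[len(x)-1:]
    return c[:maxlag]/c[0]

plt.plot(acf(samples[nmcmc:]), '-o')
plt.xlabel('Lag')
plt.ylabel('Autocorrelation');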
Trace plots are often used to informally assess stochastic convergence. Rigorous demonstration of convergence is an unsolved problem, but simple ideas such as running multiple chains and checking that they are converging to similar distributions are often employed in practice.
def mh_coin(niters, n, h, theta, lik, prior, sigma):
    # Run a random walk Metropolis-Hastings chain for the coin-toss model
    samples = [theta]
    while len(samples) < niters:
        theta_p = theta + st.norm(0, sigma).rvs()
        rho = min(1, target(lik, prior, n, h, theta_p)/target(lik, prior, n, h, theta))
        u = np.random.uniform()
        if u < rho:
            theta = theta_p
        samples.append(theta)
    return samples
n = 100
h = 61
lik = st.binom
prior = st.beta(a, b)
sigma = 0.05
niters = 100

sampless = [mh_coin(niters, n, h, theta, lik, prior, sigma)
            for theta in np.arange(0.1, 1, 0.2)]
# Convergence of multiple chains
for samples in sampless:
    plt.plot(samples, '-o')
plt.xlim([0, niters])
plt.ylim([0, 1]);
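One way to make the "converging to similar distributions" check more quantitative is the Gelman-Rubin statistic \(\hat{R}\), which compares between-chain and within-chain variability; values close to 1 suggest the chains have mixed. This diagnostic is not part of the original example, but a basic sketch (without the chain-splitting used by modern variants), applied to the chains just generated, might look like this:

def gelman_rubin(chains):
    # chains: list of equal-length 1D chains
    x = np.array(chains)                 # shape (number of chains, n)
    n = x.shape[1]
    W = x.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * x.mean(axis=1).var(ddof=1)   # between-chain variance
    var_hat = (n - 1)/n * W + B/n        # pooled estimate of the variance
    return np.sqrt(var_hat/W)

print(gelman_rubin(sampless))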
There are two main ideas - first, that the samples generated by MCMC constitute a Markov chain, and that this Markov chain has a unique stationary distribution that is always reached if we generate a very large number of samples. The second idea is to show that this stationary distribution is exactly the posterior distribution that we are looking for. We will only give the intuition here as a refresher.
Since possible transitions depend only on the current and the proposed values of \(\theta\), the successive values of \(\theta\) in a Metropolis-Hastings sample constitute a Markov chain. Recall that for a Markov chain with a transition matrix \(P\),
\[\pi = \pi P\]
means that \(\pi\) is a stationary distribution. If it is possible to go from any state to any other state, then the matrix is irreducible. If in addition it is not possible to get stuck in an oscillation, then the matrix is also aperiodic or mixing. For finite state spaces, irreducibility and aperiodicity guarantee the existence of a unique stationary state. For continuous state spaces, we need the additional property of positive recurrence - starting from any state, the expected time to come back to the original state must be finite. If we have all 3 properties of irreducibility, aperiodicity and positive recurrence, then there is a unique stationary distribution. The term ergodic is a little confusing - most standard definitions take ergodicity to be equivalent to irreducibility, but Bayesian texts often take ergodicity to mean irreducibility, aperiodicity and positive recurrence, and we will follow the latter convention. For another intuitive perspective, the random walk Metropolis-Hastings algorithm is analogous to a diffusion process. Since all states are communicating (by design), eventually the system will settle into an equilibrium state. This is analogous to converging on the stationary state.
We will consider the simplest possible scenario for an explicit calculation. Suppose we have a two-state system where the posterior probabilities are \(\theta\) and \(1 - \theta\), with \(\theta < 0.5\). With the Metropolis-Hastings algorithm, a proposed move from the lower-probability state to the higher-probability state is always accepted (the ratio \(\frac{1-\theta}{\theta}\) exceeds 1), while a proposed move in the opposite direction is accepted with probability \(\frac{\theta}{1-\theta}\), giving the transition matrix \[P = \left( \begin{array}{cc} 0 & 1 \\ \frac{\theta}{1-\theta} & 1 - \frac{\theta}{1-\theta} \end{array} \right)\] We find the stationary distribution \(\pi = \left( \begin{array}{cc} p & 1-p \end{array} \right)\) by solving \[\pi = \pi P\]
to be \(\pi = \left( \begin{array}{cc} \theta & 1-\theta \end{array} \right)\), which is the posterior distribution.
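We can also verify this numerically. With an illustrative value such as \(\theta = 0.3\), repeated multiplication by the transition matrix drives any starting distribution to the posterior:

theta = 0.3
P = np.array([[0, 1],
              [theta/(1-theta), 1 - theta/(1-theta)]])
pi = np.array([0.5, 0.5])   # arbitrary starting distribution
for _ in range(100):
    pi = pi @ P              # one step of the chain
print(pi)                    # approaches [0.3, 0.7], the posterior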
The final point is that a stationary distribution has to satisfy the detailed balance (reversibility) criterion, which says that the probability of being in state \(x\) and moving to state \(y\) must be the same as the probability of being in state \(y\) and moving to state \(x\). Or, more briefly,
\[\pi(x)P(x \to y) = \pi(y)P(y \to x)\]
and the need to make sure that this condition is true accounts for the strange-looking acceptance criterion
\[\min \left(1, \frac{g(\theta_p \ | \ X) p(\theta \ | \ \theta_p)}{g(\theta \ | \ X) p(\theta_p \ | \ \theta)} \right)\]
Intuition
We want the stationary distribution \(\pi(x)\) to be the posterior distribution \(P(x)\). So we set
\[P(x)P(x \to y) = P(y)P(y \to x)\]
Rearranging, we get
\[\frac{P(x \to y)}{P(y \to x)} = \frac{P(y)}{P(x)}\]
We split the transition probability into separate proposal \(q\) and acceptance \(A\) parts, and after a little algebraic rearrangement get
\[\frac{A(x \to y)}{A(y \to x)} = \frac{P(y) \, q(y \to x)}{P(x) \, q(x \to y)}\]
An acceptance probability that meets this condition is
\[A(x \to y) = \min \left(1, \frac{P(y) \, q(y \to x)}{P(x) \, q(x \to y)} \right)\]
since at most one of the two \(\min\) expressions can be strictly less than 1: if the ratio on the right is less than 1 then \(A(x \to y)\) equals it while \(A(y \to x) = 1\), and if it is greater than 1 then \(A(x \to y) = 1\) while \(A(y \to x)\) is its reciprocal - in both cases the required ratio holds.
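As a quick numerical sanity check of this claim, we can plug in arbitrary made-up values for the target densities and proposal probabilities and confirm that the ratio of acceptance probabilities recovers the required right-hand side:

import numpy as np

rng = np.random.default_rng(1)
Px, Py = rng.uniform(size=2)    # unnormalized target densities at x and y
qxy, qyx = rng.uniform(size=2)  # proposal densities q(x -> y) and q(y -> x)
R = (Py*qyx)/(Px*qxy)           # required ratio from detailed balance
Axy = min(1, R)                 # acceptance probability for x -> y
Ayx = min(1, 1/R)               # acceptance probability for y -> x
print(np.isclose(Axy/Ayx, R))   # True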
See http://www.cs.indiana.edu/~hauserk/downloads/MetropolisExplanation.pdf for algebraic details.