class: center, middle, inverse, title-slide

# Sequential Monte Carlo in Statistics
## July 2019
### Anthony Lee
University of Bristol & Alan Turing Institute
With many thanks to the Statistical Society of Australia and ACEMS

---

# Outline

- Introduction: Classical Monte Carlo & Importance Sampling
- Sequential Monte Carlo & hidden Markov models
- Arbitrary target distributions
- Twisted flows
- Variance estimation

<br/>

For more details, and references, please see the recent survey

Doucet & Lee. Sequential Monte Carlo methods. In *Handbook of Graphical Models*, 2018.

A preprint is linked from my website.

---
class: inverse, center, middle

# Introduction
## Classical Monte Carlo & Importance Sampling

---

# Classical Monte Carlo: integral

Main idea: approximate sums/integrals with random variables.

Notation: a measurable space `\((\mathsf{X}, \mathcal{X})\)`, a function `\(f:\mathsf{X}\rightarrow\mathbb{R}\)` and a measure `\(\mu\)` on `\(\mathcal{X}\)`, with

`$$\mu(f):=\int_{\mathsf{X}}f(x)\mu({\rm d}x).$$`

--

For example, consider

- `\(\mathsf{X} = \{1,\ldots,s\}\)`, so `\(\mu(f)\)` is a sum,
- `\(\mathsf{X} = \mathbb{N}\)`, so `\(\mu(f)\)` is an infinite sum,
- `\(\mu\)` having a density w.r.t. Lebesgue measure on `\(\mathbb{R}^d\)`.

--

If `\(\mu\)` is a probability measure / distribution, then `\(\mu(1)=1\)` and

`$$\mu(f)=\mathbb{E}\left[f(X)\right],\qquad X\sim\mu.$$`

---

# Classical Monte Carlo: approximation

Assume `\(\mu\)` is a probability measure / distribution.

The Monte Carlo (particle) approximation of `\(\mu\)` is a discrete probability measure

`$$\mu^{N} := \frac{1}{N} \sum_{i=1}^N \delta_{\zeta_i},\qquad\zeta_{i}\overset{\rm iid}{\sim}\mu.$$`

--

So the Monte Carlo approximation of `\(\mu(f)\)` is the integral of `\(f\)` w.r.t. `\(\mu^N\)`,

`$$\mu^{N}(f):=\frac{1}{N}\sum_{i=1}^{N}f(\zeta_{i}).$$`

--

**Law of large numbers**: `\(\mu^{N}(f)\overset{\rm a.s.}{\rightarrow}\mu(f)\)` as `\(N\rightarrow\infty\)`.

Fundamental justification, but no handle on accuracy for finite `\(N\)`.

---

# Classical Monte Carlo: accuracy

Assume `\(\mu(f^2) < \infty\)`, i.e. `\(f \in L_2(\mathsf{X}, \mu)\)`. The variance of `\(\mu^N(f)\)` is

`$${\rm var}(\mu^N(f)) = \frac{1}{N} {\rm var}(f(X)) = \frac{1}{N}\mu(\bar{f}^2),$$`

where `\(\bar{f} := f - \mu(f)\)`.

--

**Central Limit Theorem**: as `\(N \to \infty\)` we have asymptotic normality,

`$$N^{1/2} ( \mu^N(f) - \mu(f) ) \overset{L}{\to} N(0, \mu(\bar{f}^2)).$$`

--

Good news: `\(|\mu^N(f) - \mu(f)|\)` is `\(\mathcal{O}_P(N^{-1/2})\)`.

Bad news: `\(\mu(\bar{f}^2)\)` may be very large, e.g. when `\(\mathsf{X}\)` is "large".

---

# Importance sampling

What if we want to approximate `\(\pi(f)\)` but we can only simulate according to `\(\mu\)`?

--

If `\(\mu\)` dominates `\(\pi\)`, i.e. `\(\pi(x) > 0 \Rightarrow \mu(x) > 0\)`, then we can write

`$$\pi(f) = \mu(w \cdot f) = \int f(x) w(x) \mu({\rm d}x),$$`

where `\(w = {\rm d} \pi / {\rm d} \mu\)` is the ratio of the densities of `\(\pi\)` and `\(\mu\)`.

--

So we can approximate `\(\pi(f)\)` by `\(\mu^N(w \cdot f)\)`. This is an unbiased approximation with variance

`$${\rm var}(\mu^N(w \cdot f)) = \frac{\mu(w^2 \cdot f^2) - \pi(f)^2}{N} = \frac{\pi(w \cdot f^2) - \pi(f)^2}{N}.$$`

---

# Self-normalized importance sampling

We can also "self-normalize" the approximation, i.e. compute

`$$\pi^N_{\rm SN}(f) = \frac{\mu^N(w \cdot f)}{\mu^N(w)} = \frac{\sum_{i=1}^N w(X_i)f(X_i)}{\sum_{i=1}^N w(X_i)} = \frac{\mu^N \cdot w}{\mu^N(w)} (f).$$`

Useful when `\(w\)` can be computed only up to an unknown normalizing constant, e.g. when `\(\pi\)` is a posterior distribution.

--

This is biased but strongly consistent and asymptotically normal as `\(N \to \infty\)`.
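
--

A minimal R sketch (not from the original slides) of the computation, with an illustrative target `\(\pi = N(0,1)\)`, proposal `\(\mu = N(0,2^2)\)`, and weight known only up to a constant:

```r
# Illustrative sketch: estimate pi(f) with f(x) = x^2 for pi = N(0,1),
# using a heavier-tailed proposal mu = N(0, 2^2). The weight w is the
# density ratio d(pi)/d(mu) known only up to a normalizing constant.
set.seed(1)
N <- 1e5
x <- rnorm(N, 0, 2)                 # draws from the proposal mu
w <- exp(-x^2/2) / dnorm(x, 0, 2)   # unnormalized weight
f <- x^2
sum(w * f) / sum(w)                 # self-normalized estimate of pi(f) = 1
```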
The asymptotic variance is

`$$\lim_{N \to \infty} N ~ {\rm var}(\pi^N_{\rm SN}(f) ) = \pi(w \cdot \bar{f}^2),$$`

where `\(\bar{f} = f - \pi(f)\)`. The asymptotic variance can be smaller than the asymptotic variance for IS.

---
class: inverse, center, middle

# Sequential Monte Carlo
## Motivated by Hidden Markov Models

---

# Hidden Markov model

Let `\((X_1,\ldots,X_n)\)` be a Markov chain with transition kernels `\(M_2,\ldots,M_n\)` and initial distribution `\(\mu\)`. So `\(X_1 \sim \mu\)` and `\(X_p \mid X_{p-1} \sim M_p(X_{p-1}, \cdot)\)`.

Let `\(Y_p \mid (X_1,\ldots,X_n) \sim g(X_p,\cdot)\)`, conditionally independently across `\(p\)`. In an HMM, the observations are `\(Y_1,\ldots,Y_n\)`.
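
---

# A toy HMM in R

As a concrete illustration (not from the original slides), here is one possible linear-Gaussian instantiation of `\(\mu\)`, `\(M_p\)` and `\(G_p\)`. The model, parameter values and simulated data are assumptions chosen only to match the `(mu, M, G)` interface used by the `smc` code later in these slides; in particular, it is not necessarily the model behind the numerical output shown later.

```r
# Toy linear-Gaussian HMM: X_1 ~ N(0,1), X_p | X_{p-1} ~ N(0.9 X_{p-1}, 1),
# Y_p | X_p ~ N(X_p, 1). Simulate a short sequence of states and observations.
set.seed(1)
n.obs <- 10
xs <- numeric(n.obs)
ys <- numeric(n.obs)
xs[1] <- rnorm(1)
ys[1] <- rnorm(1, xs[1], 1)
for (p in 2:n.obs) {
  xs[p] <- rnorm(1, 0.9*xs[p-1], 1)
  ys[p] <- rnorm(1, xs[p], 1)
}

mu <- function(N) rnorm(N)                        # N samples from the initial distribution
M  <- function(p, x) rnorm(length(x), 0.9*x, 1)   # one step of the transition kernel M_p
G  <- function(p, x) dnorm(ys[p], x, 1)           # potential G_p(x) = g(x, y_p)
```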
---

# HMM examples

- Economics: asset prices driven by a latent process
- Chemistry: reactions driven by concentrations of chemicals
- Physics: imperfect measurements of a dynamical system
- Robotics: localization in space through noisy observations
- Ecology: sparse observations of animals in space and time
- Environment: noisy/partial observations of climate

<br/>

- Essentially any discretely and partially observed Markov process.

---

# Objects of interest

For `\(p \in \{1,\ldots,n\}\)`, we are interested in:

1. `\(\eta_p\)` : the distribution of `\(X_p\)` given `\(Y_1,\ldots,Y_{p-1}\)`. [ Note `\(\eta_1 \equiv \mu\)`. ]
1. `\(\hat{\eta}_p\)` : the distribution of `\(X_p\)` given `\(Y_1,\ldots,Y_{p}\)`.
1. `\(Z_p\)` : the marginal likelihood associated with `\(y_1,\ldots,y_p\)`.

--

For example, we might have densities in common notation

`$$\begin{align} \eta_p(x_p) &= f(x_p \mid y_1,\ldots,y_{p-1}) \\ \hat{\eta}_p(x_p) &= f(x_p \mid y_1,\ldots,y_p) \\ Z_p &= f(y_1,\ldots,y_p). \end{align}$$`

--

To simplify notation in what follows, we define

`$$G_p(x) := g(x, y_p),$$`

rendering the dependence on `\(y_1,\ldots,y_n\)` implicit.

---

# Recursive updates

With densities, we have `\(\eta_1(x_1) = \mu(x_1) = f(x_1)\)`, and

`$$\hat{\eta}_1(x_1) = f(x_1 \mid y_1) = \frac{f(x_1) f(y_1 \mid x_1)}{f(y_1) } = \frac{\eta_1 \cdot G_1}{\eta_1(G_1)}(x_1).$$`

Similarly, for `\(p \in \{2,\ldots,n\}\)`,

`$$\eta_p(x_p) = f(x_p \mid y_{1:p-1}) = \int f(x_{p-1} \mid y_{1:p-1}) f(x_p \mid x_{p-1}) {\rm d}x_{p-1} = \hat{\eta}_{p-1}M_p(x_p),$$`

and

`$$\hat{\eta}_p(x_p) = \frac{f(x_p \mid y_{1:p-1}) f(y_p \mid x_p)}{\int f(y_p \mid x_p')f(x_p' \mid y_{1:p-1}) {\rm d}x_p' } = \frac{\eta_p \cdot G_p}{\eta_p(G_p)}(x_p).$$`

We also have

`$$Z_p = f(y_{1:p}) = f(y_{1:p-1}) f(y_p \mid y_{1:p-1}) = Z_{p-1} \eta_p(G_p).$$`

---

# Forward algorithm (finite state space)

**Algorithm**:

1. Set
`$$\eta_1 = \mu, \qquad Z_1 = \eta_1(G_1), \qquad \text{ and } \qquad \hat{\eta}_1 = \frac{\eta_1 \cdot G_1}{\eta_1(G_1)}.$$`
2. For `\(p = 2,\ldots,n\)`: set
`$$\eta_p = \hat{\eta}_{p-1} M_p, \qquad Z_p = Z_{p-1} \eta_p(G_p), \qquad \text{ and } \qquad \hat{\eta}_p = \frac{\eta_p \cdot G_p}{\eta_p(G_p)}.$$`

--

<br/><br/>

In SMC we replace intractable objects with particle approximations.

---

# SMC (general state space)

Sample `\(\zeta_1^i \overset{\rm iid}{\sim} \mu = \eta_1\)` for `\(i \in \{1,\ldots,N\}\)`. Set

`$$\eta_1^N = \frac{1}{N} \sum_{i=1}^N \delta_{\zeta_1^i}, \qquad Z_1^N = \eta_1^N(G_1), \qquad \hat{\eta}^N_1 = \frac{\eta_1^N \cdot G_1}{\eta_1^N(G_1)}.$$`

For `\(p = 2,\ldots,n\)`: sample

`$$\zeta_p^i \overset{\rm iid}{\sim} \hat{\eta}^N_{p-1}M_p = \frac{\sum_{j=1}^N G_{p-1}(\zeta_{p-1}^j)M_p(\zeta_{p-1}^j, \cdot)}{\sum_{j=1}^N G_{p-1}(\zeta_{p-1}^j)}, \qquad i \in \{1,\ldots,N\},$$`

and then set

`$$\eta_p^N = \frac{1}{N} \sum_{i=1}^N \delta_{\zeta_p^i}, \qquad Z_p^N = Z_{p-1}^N \eta_p^N(G_p), \qquad \hat{\eta}^N_p = \frac{\eta_p^N \cdot G_p}{\eta_p^N(G_p)}.$$`

---

# Evolutionary algorithm interpretation

Sampling

`$$\zeta_p^i \overset{\rm iid}{\sim} \hat{\eta}^N_{p-1}M_p = \frac{\sum_{j=1}^N G_{p-1}(\zeta_{p-1}^j)M_p(\zeta_{p-1}^j, \cdot)}{\sum_{j=1}^N G_{p-1}(\zeta_{p-1}^j)}, \qquad i \in \{1,\ldots,N\},$$`

can be viewed as a two-stage procedure.

--

For each `\(i \in \{1,\ldots,N\}\)`,

1. Sample `\(A_{p-1}^i \sim {\rm Categorical}(G_{p-1}(\zeta_{p-1}^1),\ldots,G_{p-1}(\zeta_{p-1}^N))\)`.
2. Sample `\(\zeta_p^i \sim M_p(\zeta_{p-1}^{A_{p-1}^i}, \cdot)\)`.

--

1. Selection: `\(G_{p-1}\)` is like a fitness function.
2. Mutation: `\(M_p\)` is a mutation kernel.

---

# R code (univariate states)

```r
smc <- function(mu, M, G, n, N) {
  zetas <- matrix(0,n,N)   # particle values, one row per time step
  as <- matrix(0,n-1,N)    # ancestor indices
  gs <- matrix(0,n,N)      # potential values G_p(zeta_p^i)
  log.Zs <- rep(0,n)       # log marginal likelihood estimates, log Z_p^N
  zetas[1,] <- mu(N)
  gs[1,] <- G(1, zetas[1,])
  log.Zs[1] <- log(mean(gs[1,]))
  for (p in 2:n) {
    # simulate ancestor indices, then particles
    as[p-1,] <- sample(N,N,replace=TRUE,prob=gs[p-1,]/sum(gs[p-1,]))
    zetas[p,] <- M(p, zetas[p-1,as[p-1,]])
    gs[p,] <- G(p, zetas[p,])
    log.Zs[p] <- log.Zs[p-1] + log(mean(gs[p,]))
  }
  return(list(zetas=zetas,gs=gs,as=as,log.Zs=log.Zs))
}
```

---

# Evolution of the particle system
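
One way to visualize the evolution (a sketch, reusing the illustrative toy `mu`, `M`, `G` and simulated data from the earlier toy-HMM slide, which are not part of the original deck):

```r
# Run the particle filter on the toy model and plot each particle's
# position over time; matplot draws one line per particle (column).
out <- smc(mu, M, G, n.obs, 64)
matplot(1:n.obs, out$zetas, type = "l", lty = 1, col = rgb(0, 0, 0, 0.2),
        xlab = "time p", ylab = "particle positions")
lines(1:n.obs, ys, col = "red", lwd = 2)   # overlay the simulated observations
```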
---

# A little bit of theory

For simplicity, assume `\(G_1,\ldots,G_n\)` and `\(f\)` are bounded.

**Lack of bias of `\(Z_n^N\)`**: `\(\mathbb{E}[Z_n^N] = Z_n\)`.

**Law of large numbers**: `\(Z_n^N \overset{\rm a.s.}{\to} Z_n\)` and `\(\eta_n^N(f) \overset{\rm a.s.}{\to} \eta_n(f)\)` as `\(N \to \infty\)`.

**Asymptotic normality**: `\(N^{1/2}(Z_n^N - Z_n)\)` and `\(N^{1/2}(\eta_n^N(f) - \eta_n(f))\)` converge in distribution to (different) normal random variables.

**Time-uniform convergence**: under special assumptions one has

`$$\sup_{n \geq 1} \mathbb{E}\left [ | \eta_n^N(f) - \eta_n(f) |^2 \right ]^{1/2} \leq \frac{E(f)}{\sqrt{N}}.$$`

**Stability of `\(Z_n^N\)` with `\(N \propto n\)`**: under similar assumptions,

`$${\rm var}\left (\frac{Z_n^N}{Z_n} \right ) \leq (1+C/N)^n.$$`

---

# Extensions to paths

One can also define the measure

`$$\bar{\gamma}_n({\rm d}x_1,\ldots,{\rm d}x_n) = \mu({\rm d}x_1)\prod_{p=2}^n G_{p-1}(x_{p-1})M_p(x_{p-1}, {\rm d}x_p).$$`

In an HMM, this has density `\(f(x_{1:n}, y_{1:n-1}) \propto f(x_{1:n} \mid y_{1:n-1})\)`.

--

By tracing an ancestral line from each of the time `\(n\)` particles, one can define a particle (path) approximation of `\(\bar{\gamma}_n\)`,

`$$\bar{\gamma}_n^N(f) = Z_{n-1}^N \frac{1}{N} \sum_{i=1}^N f(\zeta_{1:n}^{\text{path }i}),$$`

where the `\(i\)`th path arises by tracing the ancestors of `\(\zeta_n^i\)`. These approximations are unbiased and consistent.

A corresponding `\(\bar{\eta}_n^N(f)\)` is obtained by omitting the `\(Z_{n-1}^N\)` term; it is biased but consistent.

---

# Ancestral lineages, `\(N=16\)` particles

![](smc-tutorial_files/figure-html/unnamed-chunk-4-1.png)<!-- -->

---

# Ancestral lineages, `\(N=256\)` particles

![](smc-tutorial_files/figure-html/unnamed-chunk-5-1.png)<!-- -->

---
class: inverse, center, middle

# Arbitrary target distributions

---

# Flow of distributions

The algorithm does not necessarily require an HMM interpretation. One defines `\(\eta_1 = \mu\)` and, for `\(p=2,\ldots,n\)`,

`$$\eta_p = \frac{\eta_{p-1}\cdot G_{p-1}}{\eta_{p-1}(G_{p-1})} M_p = \Phi_p(\eta_{p-1}).$$`

--

<br/> <br/> <br/>

In order to use SMC to approximate `\(\pi(f)\)` for an arbitrary `\(\pi\)`, we just need to construct an appropriate flow from `\(\mu\)` to `\(\pi\)`, where `\(\mu\)` is easy to sample from.

---

# An SMC sampler

Define `\(\mu = \eta_1 \gg \ldots \gg \eta_n = \pi\)` directly. Then let

`$$G_{p-1} \propto \frac{{\rm d}\eta_p}{{\rm d}\eta_{p-1}},$$`

and each `\(M_p\)` be an `\(\eta_p\)`-invariant Markov kernel, i.e. `\(\eta_p M_p = \eta_p\)`.

--

We can then verify that for `\(p=2,\ldots,n\)`,

`$$\frac{\eta_{p-1} \cdot G_{p-1}}{\eta_{p-1}(G_{p-1})}M_p = \eta_p M_p = \eta_p,$$`

so the flow is valid.

--

In practice, one might choose some `\(\mu \gg \pi\)` and let

`$$\eta_p(x) \propto \mu(x)^{1-\beta_p} \pi(x)^{\beta_p},$$`

for some `\(0=\beta_1 < \ldots < \beta_n = 1\)`.

---
class: inverse, center, middle

# Twisted flows

---

# Twisting flows

A "good" flow has small associated variances. Once you have a flow, you can also *twist* it.

A given `\(\mu\)`, `\(M_2,\ldots,M_n\)`, `\(G_1,\ldots,G_n\)` can be twisted.

Notation: `\(M(f)(x) = \int f(x')M(x,{\rm d}x')\)`.
--

For some positive functions `\(\psi_1,\ldots,\psi_{n}\)` (with the convention `\(\psi_{n+1} \equiv 1\)`), define

`$$\mu^\psi = \frac{\mu \cdot \psi_1}{\mu(\psi_1)}, \qquad G_1^\psi(x) := \frac{G_1(x)M_2(\psi_2)(x)}{\psi_1(x)} \mu(\psi_1),$$`

and for `\(p \in \{2,\ldots,n\}\)`

`$$M_p^\psi(x, {\rm d}x'):=\frac{M_p(x, {\rm d}x') \cdot \psi_p(x')}{M_p(\psi_p)(x)},\qquad G_p^\psi(x) = \frac{G_p(x)M_{p+1}(\psi_{p+1})(x)}{\psi_p(x)}.$$`

Calculations give `\(\eta_p^\psi \propto \eta_p \cdot \psi_p\)` for each `\(p\)`.

---

# Optimal twisting functions

To obtain a zero-variance approximation of `\(Z_n\)`, one should use

`$$\psi_p(x) = G_p(x) M_{p+1}(\psi_{p+1})(x).$$`

--

In the HMM context, this corresponds to

`$$\psi_p(x_p) = f(y_{p:n} \mid x_p).$$`

Intuition: particles are weighted by how well they explain the "future" as well as the present.

--

The optimal `\(\psi\)` functions are typically intractable, but one can use any reasonable approximation.

Example: for `\(\psi_p = G_p\)`, one obtains the fully-adapted auxiliary particle filter.

Caution: the intermediate distributions are changed by twisting.

---
class: inverse, center, middle

# Variance estimation

---

# How accurate are our SMC approximations?

There is substantial theory on how variable SMC approximations are, especially for the unbiased, "unnormalized" approximations.

--

In practice this is not *that* helpful for assessing approximation quality.

A simple estimate of the variance can be phrased in terms of the time `\(n\)` particles and the *Eve* indices of those particles, i.e. the indices of their time `\(1\)` ancestors, `\(E_n^1,\ldots,E_n^N\)`.

![](smc-tutorial_files/figure-html/unnamed-chunk-6-1.png)<!-- -->

---

# Estimate of variance of `\(Z_{n-1}^N\)`

Let `\(E_n^j\)` denote the index of the time `\(1\)` ancestor (Eve index) of `\(\zeta_n^j\)`. Then an estimate of `\({\rm var}(Z_{n-1}^N/Z_{n-1})\)` is

`$$V_n^N = 1 - \left (\frac{N}{N-1} \right)^n \left( 1 - \frac{1}{N^2} \sum_{i=1}^N | \{j : E_n^j = i \} |^2 \right).$$`

--

If for some `\(k\)`, `\(E_n^1 = \cdots = E_n^N = k\)`, then `\(V_n^N = 1\)`.

We have lack-of-bias in the sense that

`$$\mathbb{E}[(Z_{n-1}^N)^2 V_n^N] = {\rm var}(Z_{n-1}^N).$$`

We have consistency in the sense that

`$$N V_n^N \overset{P}{\to} \lim_{N \to \infty} N {\rm var}(Z_{n-1}^N/Z_{n-1}).$$`

---

# R code

```r
VnN <- function(as) {
  n <- dim(as)[1] + 1
  N <- dim(as)[2]
  eves <- 1:N
  # recursively update the Eve indices
  for (p in 1:(n-1)) {
    eves <- eves[as[p,]]
  }
  1 - (N/(N-1))^n*(1 - sum(table(eves)^2)/N^2)
}
```

The estimate depends only on the Eve indices, which depend only on the ancestors.

Approximating the variance of `\(\eta_n^N(f)\)` or `\(Z^N_{n-1}\eta_n^N(f)\)` is only a little bit more complicated, and does depend on particle values.

---

```r
lZ <- -12.4395996645203368302645685616880655
trials <- 1e3
lZs <- rep(0, trials)
Vs <- rep(0, trials)
for (i in 1:trials) {
  out <- smc(mu, M, G, 10, 128)
  lZs[i] <- out$log.Zs[9]
  Vs[i] <- VnN(out$as)
}
mean(exp(lZs-lZ)) # lack-of-bias: should be close to 1
```

```
## [1] 0.9941002
```

```r
var(exp(lZs-lZ)) # sample relative variance of Zs
```

```
## [1] 0.02712202
```

```r
mean(exp(2*(lZs-lZ))*Vs) # unbiased estimate of relative variance
```

```
## [1] 0.02755751
```

```r
mean(Vs) # biased estimate of relative variance
```

```
## [1] 0.02746865
```

---

![](smc-tutorial_files/figure-html/unnamed-chunk-9-1.png)<!-- -->

```
## [1] "VnN(out$as) = 0.032616368188082"
```

---
class: inverse, center, middle

# Remarks

---

# Remarks

SMC is useful for approximating integrals with a particular structure.
It can be very computationally efficient in suitable scenarios, e.g. when time-uniform convergence holds.

One can also transform complex, high-dimensional integrals into integrals with this structure.

Not covered:

- Particle MCMC: use of SMC within MCMC, e.g. to infer parameters.
- Specific smoothing algorithms.
- Refined theoretical results.

---
class: center, middle

# Thanks!