Training a cosmological likelihood emulator …

I had a read through this recent arXival by Aslanyan et al. presenting their ‘learn-as-you-go’ algorithm for speeding up posterior sampling via emulation of the cosmological likelihood function.  As the authors note, emulation is now a common strategy for dealing with the long compute times of the exact likelihood (‘long’ relative to the need to make many evaluations, on the order of tens of thousands, during an MCMC run).  One common approach is to use a Gaussian process model as a smooth interpolator for the likelihood (or alternatively an artificial neural network, which is more-or-less the same thing: a flexible model for learning from a set of training data, with the output function regularised in some sensible way).  In the present work Aslanyan et al. adopt a ‘simpler’ (presumably faster) polynomial interpolation method, but aim for a competitive advantage in the decision-making process for building the training set: that is, they aim to build it adaptively with respect to an error-minimisation goal (as opposed to, e.g., building it step-wise in large, clunky batches via ‘history matching’; GPs being difficult to fit over large dynamic ranges).  I think this is a worthy ambition, although I couldn’t satisfy myself that the error modelling approach in the current version is correct.
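To fix ideas, here is a minimal sketch of the general pattern, my own toy construction rather than the authors' algorithm: it uses a GP emulator instead of their polynomial interpolation, and the exact_loglike, kernel choice, and error_tol threshold are all placeholders of mine.  The emulator is queried first, and the exact likelihood is only evaluated, and added to the training set, when the emulator's own error estimate exceeds a tolerance.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy stand-in for the expensive exact log-likelihood (hypothetical, 2 parameters).
def exact_loglike(theta):
    return -0.5 * np.sum((theta - 0.3) ** 2 / 0.05 ** 2)

# Small initial design over the unit square.
rng = np.random.default_rng(1)
train_X = rng.uniform(0.0, 1.0, size=(20, 2))
train_y = np.array([exact_loglike(x) for x in train_X])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.2),
                              normalize_y=True)
gp.fit(train_X, train_y)

def emulated_loglike(theta, error_tol=1.0):
    """Return an emulated log-likelihood, falling back to (and learning from)
    the exact call whenever the emulator's own error estimate exceeds error_tol."""
    global train_X, train_y
    mu, sd = gp.predict(theta.reshape(1, -1), return_std=True)
    if sd[0] > error_tol:                       # predicted error too large:
        y = exact_loglike(theta)                # pay for one exact evaluation,
        train_X = np.vstack([train_X, theta])   # grow the training set,
        train_y = np.append(train_y, y)
        gp.fit(train_X, train_y)                # and refit the emulator.
        return y
    return mu[0]                                # otherwise trust the emulator.

print(emulated_loglike(np.array([0.31, 0.29])))
```

An MCMC sampler would call emulated_loglike in place of the exact likelihood, so the training set grows preferentially in the regions the chain actually visits.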

In the ‘learn-as-you-go’ scheme the posterior density at each sampled point is assigned a local error estimate via cross-validation, which is then converted to a global error estimate for the marginal posterior density in each parameter via some calculations [to follow the working, use log(1+x) ≈ x in both directions] and CLT arguments (which I disagree with) presented in Section 5.  My first problem is that I don’t think errors in densities are a sensible target: what matters are the errors in either some specific posterior functionals (like the mean and variance-covariance matrix), or perhaps the credible intervals, or more generally the errors over some large class of posterior functionals.  Of course, this is a fiendishly difficult analysis problem: see, for example, the working in Delyon & Portier, where a functional CLT is proved for a similar problem (importance sampling in which the proposal density, though known precisely, is instead estimated via a leave-one-out KDE approach, to achieve in theory a faster-than-root-n convergence rate).  My point is not that I expect the astronomers to produce such a proof, but that it is important to recognise that posterior functionals are ultimately the target of inference, and that the local error in density is irrelevant when one’s calculations will ultimately involve only the single derived instance of the global density.
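As an aside on the bracketed remark above: my reading of the log(1+x) ≈ x step (this is my own reconstruction, not a formula quoted from the paper) is simply that a small additive error on the log scale is, to first order, a relative error of the same size on the density scale, and vice versa:

```latex
% My reconstruction of the log(1+x) ~ x bookkeeping, not quoted from the paper:
% small additive errors on the log scale <-> small relative errors on the density scale.
\begin{align}
  \log \hat{\mathcal{L}} = \log \mathcal{L} + \epsilon
    &\;\Rightarrow\; \hat{\mathcal{L}} = \mathcal{L}\, e^{\epsilon}
      \approx \mathcal{L}\,(1 + \epsilon) \quad (|\epsilon| \ll 1), \\
  \hat{\mathcal{L}} = \mathcal{L}\,(1 + \delta)
    &\;\Rightarrow\; \log \hat{\mathcal{L}} = \log \mathcal{L} + \log(1 + \delta)
      \approx \log \mathcal{L} + \delta \quad (|\delta| \ll 1).
\end{align}
```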

{In the same spirit I would point out that the Lyapunov CLT invoked in the present work is only valid for independent random variables, whereas errors in the emulation problem are mutually dependent (governed by the non-local discrepancy between the true likelihood and the interpolation polynomial).}
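For reference, here is the textbook statement of the Lyapunov CLT (a standard result, not quoted from the paper); independence of the summands is built directly into its hypotheses:

```latex
% Lyapunov CLT (textbook statement; independence of the X_i is part of the hypotheses).
% For independent X_1, ..., X_n with means \mu_i and finite variances \sigma_i^2,
% set s_n^2 = \sum_i \sigma_i^2.  If, for some \delta > 0,
\[
  \lim_{n \to \infty} \frac{1}{s_n^{2+\delta}}
    \sum_{i=1}^{n} \mathbb{E}\!\left[ |X_i - \mu_i|^{2+\delta} \right] = 0 ,
\]
% then the normalised sum converges in distribution to a standard normal:
\[
  \frac{1}{s_n} \sum_{i=1}^{n} \left( X_i - \mu_i \right)
    \;\xrightarrow{d}\; \mathcal{N}(0, 1) .
\]
```

With every emulation error driven by the same global discrepancy between the true likelihood and the interpolant, the summands are not independent, so the theorem does not apply without further argument.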

2 Responses to Training a cosmological likelihood emulator …

  1. You may not have noticed but the authors are all fellow Aucklanders (upstairs in the physics department). I’ve pointed them to your post.
