Long time, no post. No compelling reason for the lack of activity here, but I did have a couple of seminars to prepare—an Oxford graduate seminar on the burden of disease in the fantastic new stats building & a Warwick CRiSM seminar on (statistical) challenges in malaria studies—plus I’ve been by turns outraged and depressed by a shitload of bad science in the non-astronomical domain (including but not limited to the UoW anti-vaxxer thesis debacle, from which my emails to Prof Raper & co have yielded only boilerplate admin responses, with no one willing to engage on the science). One minor news item that didn’t take up any of my time or thought, but which might be of interest to anyone who wonders whether I troll without getting trolled back, is the comments (scroll down) on our Nature paper: one guy who thinks robots are a practical tool for disease burden estimation in Africa, and another who thinks only convex problems can be studied with any clarity.

Astrostatistics

Two papers on astro-ph in the past couple of weeks address the problem of inferring the covariance matrix (or its inverse, the precision matrix) of a multivariate Normal. The first, by Sellentin & Heavens, deals with a problem of importance in cosmological studies, in which the observed data are believed to be drawn from a multivariate Gaussian sampling distribution whose mean and covariance are in principle determined by a cosmological model with some unknown parameters, but in practice *the covariance/precision matrix for each given choice of parameters can only be estimated from expensive mock data simulations*. Past analyses in this field have observed that the sample covariance matrix from a small number of mock data simulations, though unbiased for the true covariance matrix, yields a biased estimator of the precision matrix needed for likelihood computation; the proposed workaround was to rescale the sample covariance so as to de-bias the estimated precision. However, this solution ignores a key point: the sample covariance is an inherently noisy statistic, so unbiasedness alone is not sufficient to ensure that our parameter inferences (e.g. from an MCMC run plugging in these estimates of the covariance matrix) will match those we would obtain under the true likelihood.
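The bias in question is easy to exhibit numerically. Here is a minimal numpy sketch (my own, not from either paper; the dimension, mock count, and seed are arbitrary) showing that the inverse of the sample covariance over-estimates the true precision on average, and that the standard rescaling by (n − p − 2)/(n − 1) removes the bias in expectation—though not the noise:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, trials = 5, 20, 20000   # data dimension, mocks per trial, repetitions

# True covariance = identity, so the true precision matrix is also identity.
naive = np.zeros((trials, p, p))
for t in range(trials):
    mocks = rng.standard_normal((n, p))   # n mock data vectors
    S = np.cov(mocks, rowvar=False)       # unbiased for the true covariance...
    naive[t] = np.linalg.inv(S)           # ...but its inverse is biased for the precision

mean_prec = naive.mean(axis=0)
hartlap = (n - p - 2) / (n - 1)           # the usual de-biasing factor

print(np.diag(mean_prec).mean())             # ≈ (n - 1)/(n - p - 2) ≈ 1.46, not 1
print(np.diag(hartlap * mean_prec).mean())   # ≈ 1: de-biased, but still noisy
```

The residual scatter of the rescaled estimator around the truth is exactly the point Sellentin & Heavens press on: unbiasedness of the plug-in precision does not make the plug-in likelihood behave like the true one.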

Sellentin & Heavens therefore propose a Bayesian approach in which both the observations and the mock datasets are treated as data, such that the unknown covariance/precision matrix can be marginalised out under a suitable prior. Since the sampling distribution of the sample covariance matrix is Wishart, a Jeffreys prior suffices to make this marginalisation tractable: the resulting likelihood is a multivariate t (i.e. fat-tailed) rather than a multivariate Normal. When the simulations used to estimate the covariance matrix are expensive, and hence few are available, the difference between these two likelihood functions is substantial, and could drastically change one’s conclusions from those recovered under the (wrong) Normal likelihood!
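For intuition, here’s a quick sketch of the two likelihood shapes as a function of the squared deviation χ² = (x − μ)ᵀS⁻¹(x − μ): as I read the Sellentin & Heavens result, the Gaussian −χ²/2 is replaced by the fatter-tailed −(n/2)·log(1 + χ²/(n − 1)) for n mocks, which converges back to the Gaussian as n grows:

```python
import numpy as np

def gauss_loglike(chi2):
    """Multivariate-Normal log-likelihood (up to an additive constant)."""
    return -0.5 * chi2

def sh_loglike(chi2, n):
    """Sellentin-Heavens modified likelihood (up to a constant): the result of
    marginalising the Wishart-distributed sample covariance from n mocks under
    a Jeffreys prior, giving a fat-tailed multivariate-t shape."""
    return -0.5 * n * np.log1p(chi2 / (n - 1))

chi2 = 25.0   # a large squared deviation (x - mu)^T S^{-1} (x - mu)
for n in (10, 100, 10000):
    print(n, sh_loglike(chi2, n), gauss_loglike(chi2))
# With few mocks the t shape penalises the deviation far less than the
# Gaussian; as n grows the two agree.
```

With n = 10 mocks the log-likelihood penalty at χ² = 25 is roughly half the Gaussian one, which is exactly the kind of difference that can reshape posterior contours.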

A side bar that didn’t occur to me the first time I read this paper (at pre-print stage back in September 2015): one might try to develop a pseudo-marginal MCMC scheme using the unbiased estimator of the precision matrix (running new mock simulations to form the estimate at each proposal, while keeping the estimate at the current chain position fixed). However, this only allows an unbiased estimate of the log-likelihood, not of the likelihood itself (which is what pseudo-marginal MCMC requires); so we come back to the f-factory work of former Oxfordian, Pierre Jacob, whose theorem I believe shows that there is no suitable way to transform the unbiased precision matrix estimator into one that is also unbiased for the likelihood.
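The obstruction is just Jensen’s inequality, and can be seen with a two-line simulation: exponentiating an unbiased but noisy estimate of the log-likelihood gives an upward-biased estimate of the likelihood itself (numbers below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

true_loglike = -10.0   # an arbitrary 'true' log-likelihood value
noise_sd = 2.0         # noise level of the (unbiased) estimator
draws = 200000

# Unbiased, noisy estimates of the log-likelihood...
loglike_hat = true_loglike + noise_sd * rng.standard_normal(draws)
print(loglike_hat.mean())   # ≈ -10: unbiased for log L

# ...but exponentiating them over-estimates the likelihood itself, since
# E[exp(X)] = exp(mu + sd^2/2) > exp(mu) by Jensen's inequality:
print(np.log(np.exp(loglike_hat).mean()))   # ≈ -8, not -10
```

So an unbiased log-likelihood estimator is not enough for pseudo-marginal MCMC, and debiasing the exponential is precisely the sort of transformation Jacob’s theorem rules out.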

The other recent astro-ph paper on the topic of covariance matrix estimation is that by Figueira et al., entitled “*A pragmatic Bayesian perspective on correlation analysis*“. From the ‘pragmatic Bayesian’ premise I thought the authors might have developed a novel semi-parametric approach to correlation analysis, perhaps one with highly flexible forms for the marginals and their copula. In fact the article contains only an elementary description of how to infer the correlation coefficient of a bivariate Normal model. Although some newcomers to Bayesian statistics might appreciate the pedagogical presentation, I found the language somewhat patronising (a bit like the discussion in ‘red ones, blue ones‘) and was disappointed not to see any discussion of sanity checks for model suitability, especially since a look at the fitted data suggests a highly non-Normal distribution to me!
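For a sense of how elementary the core calculation is, here’s a rough numpy sketch (my own, not the authors’ code; it uses simulated data, plug-in standardisation of the margins, and a flat prior on ρ) of a grid posterior for the correlation coefficient of a bivariate Normal:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated bivariate-Normal data with true correlation 0.6 (illustrative only).
N, rho_true = 200, 0.6
x = rng.multivariate_normal(np.zeros(2),
                            np.array([[1.0, rho_true], [rho_true, 1.0]]),
                            size=N)

# Plug-in standardisation of each margin, to focus on the correlation alone.
z = (x - x.mean(axis=0)) / x.std(axis=0)
s11, s22, s12 = (z[:, 0]**2).sum(), (z[:, 1]**2).sum(), (z[:, 0] * z[:, 1]).sum()

# Grid posterior for rho under a flat prior on (-1, 1).
rho = np.linspace(-0.99, 0.99, 991)
loglike = (-0.5 * N * np.log(1.0 - rho**2)
           - (s11 - 2.0 * rho * s12 + s22) / (2.0 * (1.0 - rho**2)))
post = np.exp(loglike - loglike.max())
post /= post.sum() * (rho[1] - rho[0])   # normalise as a grid density

print(rho[post.argmax()])   # posterior mode: the sample correlation
```

The mode lands on the sample correlation, and the same grid gives credible intervals for free; none of this requires more than a dozen lines.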

Hi Ewan,

Sorry to bring up an old post, but I was wondering if you could suggest a few sanity checks to test whether the bivariate Normal model is in fact appropriate. Our solution in Figueira et al. was pragmatic in the sense of being a practical solution for inferring the correlation parameter. If we could use this approach, or a generalization of it, to learn more about the data, that would be great.

Hi João, it looks to me as if the nearest non-Normal model is a 2D Student-t distribution. A Bayesian model comparison could be made between the Student-t (d.f. = 1) and Normal models, supposing the same priors on the mode and scale matrix of the Student-t distribution as are assumed on the mean and covariance of the Normal. The marginal likelihood of the Student-t model is (I think) not available analytically, so it would have to be computed with (e.g.) MultiNest for comparison against that of the Normal.
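As a rough illustration of why such a comparison can strongly favour the Student-t for heavy-tailed data, here’s a crude plug-in log-likelihood comparison on simulated data (invented settings throughout; it stands in for, and is much weaker than, the marginal-likelihood computation via MultiNest):

```python
import numpy as np
from math import lgamma, log, pi

rng = np.random.default_rng(7)

def gauss_logpdf(x, mu, cov):
    """Bivariate Normal log-density at each row of x."""
    d = x - mu
    quad = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)
    return -0.5 * (2 * log(2 * pi) + np.linalg.slogdet(cov)[1] + quad)

def student_logpdf(x, mu, scale, df):
    """Bivariate Student-t log-density (df = 1 is the Cauchy case)."""
    p = 2
    d = x - mu
    quad = np.einsum('ij,jk,ik->i', d, np.linalg.inv(scale), d)
    const = (lgamma((df + p) / 2) - lgamma(df / 2)
             - 0.5 * p * log(df * pi) - 0.5 * np.linalg.slogdet(scale)[1])
    return const - 0.5 * (df + p) * np.log1p(quad / df)

# Heavy-tailed mock data: t (df = 1) draws with scale-matrix correlation 0.7,
# generated as Normal / sqrt(chi-squared with 1 d.f.).
scale = np.array([[1.0, 0.7], [0.7, 1.0]])
z = rng.multivariate_normal(np.zeros(2), scale, size=500)
x = z / np.sqrt(rng.chisquare(1, size=500))[:, None]

# Crude comparison: Gaussian at its plug-in moment estimates vs Student-t at
# the generating scale matrix.
ll_gauss = gauss_logpdf(x, x.mean(axis=0), np.cov(x, rowvar=False)).sum()
ll_t = student_logpdf(x, np.zeros(2), scale, df=1).sum()
print(ll_t - ll_gauss)   # large and positive for data this heavy-tailed
```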