Some quick comments on a number of recent astro-ph submissions I’ve had open as tabs but hadn’t read until today. The first (in chronological order) is on ‘exploding satellites’ (satellite galaxies, not man-made space junk) by Kuepper et al., in which the authors present a Bayesian analysis underpinned by ‘simulations’ of the orbital paths of test particles in a model potential. At first I thought these were stochastic simulations suggestive of an ABC-like case study, but it turns out they are deterministic simulations for a given number of test particles. To form an (approximate) likelihood, the final distribution of the test particles needs to be turned into a pdf via kernel smoothing. Naturally one wonders about the relationship between the quality of the posterior approximation and the number of test particles used. This may well be an application area for the multi-fidelity (multi-task) Bayesian optimisation (where optimisation need not mean finding the posterior mode, but could mean optimising the approximation to the true posterior in, e.g., a KL divergence sense) that I’m currently playing with as a tool for fitting malaria simulations. Basically, the idea expounded in papers by Swersky et al. and Kandasamy et al. (with a supernova data example) is to use many cheap, noisy simulations for exploration and fewer expensive, less-noisy simulations for optimisation. It’s an elegant approach, but it requires some fine-tuning to get the acquisition function right and to implement an efficient Gaussian process-based inference procedure.
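To make the dependence on particle number concrete, here is a minimal sketch of the kernel-smoothing step: a toy deterministic ‘simulator’ (entirely my own invention, not the authors’ code) whose final particle positions are smoothed into a pdf and evaluated at the observed positions, so that the likelihood approximation itself depends on the number of test particles.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

def simulate_final_positions(theta, n_particles):
    # Deterministic toy stand-in for the orbital integration:
    # particles placed at evenly spaced Gaussian quantiles, shifted by theta.
    u = (np.arange(n_particles) + 0.5) / n_particles
    return theta + norm.ppf(u)

def approx_log_likelihood(theta, observed, n_particles):
    # Smooth the final particle distribution into a pdf via a Gaussian KDE,
    # then evaluate it at the observed positions; the quality of this
    # approximate likelihood depends on n_particles.
    particles = simulate_final_positions(theta, n_particles)
    kde = gaussian_kde(particles)
    return np.sum(np.log(kde(observed)))

rng = np.random.default_rng(0)
observed = rng.normal(0.5, 1.0, size=50)
for n in (100, 1000, 10000):
    print(n, approx_log_likelihood(0.5, observed, n))
```

Plotting `approx_log_likelihood` over a grid of theta values for increasing `n_particles` would show the approximation (and hence the implied posterior) stabilising, which is exactly the trade-off a multi-fidelity scheme could exploit.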

The next arXival I read today was a conference proceedings contribution by Knezevic et al., in which the authors describe a Bayesian model for identifying Balmer-dominated shocks in spectroscopic ‘images’ of a supernova remnant. The modelling left me with some odd impressions: in part because the data are first binned using Voronoi tessellations (why bin when you can impose a spatial prior, which both finds the appropriate scale for sharing information and brings Bayesian shrinkage to the inference?), and in part because the authors use a leave-one-out cross-validation (LOO-CV) estimate of the model evidence. The latter is attributed to Bailer-Jones (2012), in which Coryn suggests an idea equivalent to O’Hagan’s (circa 1991) partial Bayes factors: namely, to use some proportion of the data to first constrain one’s (typically) uninformative priors, and then use the remainder of the data to evaluate the marginal likelihood. As discussed in O’Hagan’s 1995 read paper (and, in particular, in the contributed discussion appendix therein), there are a number of practical and theoretical concerns with this proposal. The consensus seems to be that if one is going to use this approach one should use only a minimal amount of data for the initial constraint and most of the data for the marginal likelihood evaluation. By contrast, the Bailer-Jones approach is to perform 10-fold cross-validation (i.e., using 90% of the sample to update the prior and just 10% for marginal likelihood estimation), which is much closer to Murray Aitkin’s much-maligned posterior Bayes factors idea.
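The role of the training fraction is easy to see in a conjugate toy model. The sketch below is my own construction (a Normal mean with known variance, nothing from the paper): a vague prior is first updated on a training fraction of the data, and the joint marginal likelihood of the held-out remainder is then computed exactly by sequential conjugate updating.

```python
import numpy as np
from scipy.stats import norm

def update(mu, var, yi, sigma):
    # One conjugate Normal-Normal update of the prior on the mean.
    prec = 1.0/var + 1.0/sigma**2
    return (mu/var + yi/sigma**2)/prec, 1.0/prec

def log_marginal(y_eval, mu, var, sigma=1.0):
    # Exact joint marginal likelihood of y_eval under a Normal(mu, var)
    # prior on the mean, via the chain rule of sequential prediction.
    lp = 0.0
    for yi in y_eval:
        lp += norm.logpdf(yi, mu, np.sqrt(var + sigma**2))
        mu, var = update(mu, var, yi, sigma)
    return lp

def partial_log_evidence(y, train_frac, prior_var=100.0, sigma=1.0):
    # Constrain the vague prior on the first train_frac of the data,
    # then score the held-out remainder.
    n_train = int(train_frac * len(y))
    mu, var = 0.0, prior_var
    for yi in y[:n_train]:
        mu, var = update(mu, var, yi, sigma)
    return log_marginal(y[n_train:], mu, var, sigma)

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=100)
print(partial_log_evidence(y, 0.05))  # minimal training set (O'Hagan-style)
print(partial_log_evidence(y, 0.90))  # 90% training (Bailer-Jones-style)
```

Note that the two printed quantities cover different amounts of held-out data and so are not directly comparable as evidences; the point of a real comparison is that each candidate model would be scored on the same split, and the ranking can depend on where that split falls.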

The final arXival I read this morning was one by Gratier et al. looking at the molecular gas mass of M33. The statistical model here is a Bayesian errors-in-variables regression, following in part the guidance of Kelly and of Hogg et al., and it seems to be properly implemented. My only question concerns the suitability of the (unimodal) Normal forms supposed for the intrinsic distributions of the observed covariates; histograms of the observed and inferred distributions would be useful here. If the Normal is insufficiently flexible there are, of course, both simple and less simple alternatives: one is an infinite mixture of Normals, which can be quite easy to code up as a Gibbs sampler by cobbling together existing R packages, as we found in Bonnie’s paper on the calibration of RDT- and microscopy-based prevalence estimates.
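As a reference point for checking that Normal assumption, here is a minimal sketch (my own toy construction with homoscedastic, known measurement errors; not the paper’s implementation) of the Gaussian errors-in-variables marginal likelihood, in which the Normal intrinsic distribution of the true covariate is integrated out analytically:

```python
import numpy as np
from scipy.stats import multivariate_normal

def eiv_loglik(params, x_obs, y_obs, sx, sy):
    # params: intercept a, slope b, plus the mean mu and spread tau of the
    # assumed Normal intrinsic distribution of the true covariate.
    a, b, mu, tau = params
    # With x_true ~ N(mu, tau^2), x_obs = x_true + N(0, sx^2), and
    # y_obs = a + b*x_true + N(0, sy^2), the pair (x_obs, y_obs) is
    # jointly Normal with the moments below.
    mean = [mu, a + b*mu]
    cov = [[tau**2 + sx**2, b*tau**2],
           [b*tau**2,       b**2*tau**2 + sy**2]]
    pts = np.column_stack([x_obs, y_obs])
    return multivariate_normal(mean, cov).logpdf(pts).sum()

# Toy data generated to be consistent with the model:
rng = np.random.default_rng(0)
x_true = rng.normal(2.0, 0.5, size=200)
x_obs = x_true + rng.normal(0, 0.2, size=200)
y_obs = 1.0 + 3.0*x_true + rng.normal(0, 0.3, size=200)
print(eiv_loglik((1.0, 3.0, 2.0, 0.5), x_obs, y_obs, 0.2, 0.3))
```

Swapping the single Normal for a mixture simply replaces this closed form with a weighted sum of such Gaussian terms (one per component), which is essentially what the Gibbs-sampled mixture-of-Normals approach delivers.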