A recent arXival has proposed a new approach to resolving the tension between the local and cosmological estimates of the Hubble constant. Or, more specifically, to solving the pesky problem of having to fit the ‘extended physics’ models to the data—simply vary the non-standard cosmological parameters by hand until they bring the cosmological estimate into line with the local one. An added advantage is that you really win out in model selection terms: if you’re going to compare marginal likelihoods (or ‘evidences’) of the extended model against the base model, you’ve now saved all that probability mass that would ordinarily be squandered on values of the non-standard parameters that don’t improve the agreement.
Of course, this isn’t a recommended approach because: (i) fitting a model using likelihoods is generally better than fitting it by hand; and (ii) the whole point of using the marginal likelihood as a model weighting device is to penalise more complex/flexible models when they are sensitive to extra ‘tuning’ parameters that cannot be predicted confidently before seeing the data. The author’s suggestion that developing new theoretical models that can ‘predict’ the best fitting values of the extra parameters might be a sensible way to guide theoretical studies is amusingly in line with the skeptical impression that many observational astronomers have about the way theoreticians do in fact come up with their ‘predictions’.
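That Occam penalty can be made concrete with a toy calculation of my own construction (none of the numbers here come from the paper): a single Gaussian datum, a base model that fixes the location parameter, and an extended model that frees it under a flat prior of half-width a. The maximum likelihood never gets worse as a grows, but the marginal likelihood does.

```python
from scipy.stats import norm

# Toy illustration (my own example, not the paper's models): one datum
# y ~ N(theta, sigma^2). Base model fixes theta = 0; the extended model
# frees theta with a flat prior on [-a, a].
y, sigma = 1.5, 1.0

def evidence_base():
    # Z_base = N(y | 0, sigma^2)
    return norm.pdf(y, loc=0.0, scale=sigma)

def evidence_extended(a):
    # Z_ext = (1 / 2a) * integral over [-a, a] of N(y | theta, sigma^2) dtheta,
    # evaluated analytically via the Gaussian CDF.
    return (norm.cdf((a - y) / sigma) - norm.cdf((-a - y) / sigma)) / (2.0 * a)

for a in (2.0, 10.0, 50.0):
    # The Bayes factor in favour of the extended model shrinks as the prior
    # widens, even though the best-fit likelihood is unchanged.
    print(a, evidence_extended(a) / evidence_base())
```

Restricting the prior by hand to the region that ‘works’ is exactly the move that dodges this penalty—which is why it feels like a win in evidence terms.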
Some interesting technical details of this analysis are as follows.
The author’s preferred value for the non-standard parameter is the one that visually places the cosmological posterior for the Hubble constant directly on top of the local one. Of course, if one were to fit jointly to the two datasets the natural compromise point is going to lie somewhere in between the two peaks. The fact that it feels more natural to align the two posteriors exactly reflects a tendency amongst the community to view Bayesian credible intervals as Frequentist style confidence intervals and to treat ‘tension’ between posterior estimates from independent experiments as an indicator of model misspecification (either due to an inadequate cosmological model or in the noise model adopted). Certainly it is true that Bayesian estimates can show poor Frequentist style coverage when the model is misspecified, but well-specified Bayesian models can also display poor Frequentist style coverage. This is especially so when prior choices are poor and/or where the information in the available data leaves the posterior far from a comfortable (Bernstein–von Mises) regime of asymptotic behaviour—if any such regime exists (!) for the model considered. Given that cosmological datasets represent something like a single realisation from a spatial stochastic process with in-fill sample design it is not inconceivable that we are outside such a regime. The Bayesian coverage question for new physics parameters has been considered in the particle physics context (including an author of Canberran background!), but I am unaware of an equivalent analysis for the local and cosmological estimates.
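The ‘compromise in between’ point is easy to see if one idealises the two estimates as independent Gaussian likelihoods: the joint posterior mean is the precision-weighted average, which lies strictly between the two peaks and closer to the tighter one. The numbers below are illustrative values of roughly the published order, not the actual measurements.

```python
# Idealised sketch: treat the 'local' and 'cosmological' H0 estimates as
# independent Gaussians and combine them. Values are illustrative only.
mu_local, sd_local = 73.0, 1.4   # local-style estimate (km/s/Mpc)
mu_cosmo, sd_cosmo = 67.4, 0.5   # cosmological-style estimate (km/s/Mpc)

w_local, w_cosmo = sd_local ** -2, sd_cosmo ** -2   # precisions (1/variance)

# Precision-weighted mean: sits between the peaks, pulled towards the
# tighter (cosmological) measurement, not on top of either one.
mu_joint = (w_local * mu_local + w_cosmo * mu_cosmo) / (w_local + w_cosmo)
sd_joint = (w_local + w_cosmo) ** -0.5

print(mu_joint, sd_joint)
```

So a by-hand tuning that lands the cosmological posterior exactly on the local peak is doing something quite different from what a joint likelihood fit would do.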
Could this really be true? Are we debating tension with the mindset of Frequentists without having quantified how important this disagreement might be from a Frequentist perspective?
Another interesting technical detail is that the author computes marginal likelihood estimates using the Heavens et al. method, which runs solely on the output of a posterior MCMC chain. In general, I am fairly skeptical of marginal likelihood estimates based only on posterior samples, but if the posteriors are low dimensional and well-behaved this is probably okay. In this particular case I would worry about the performance of this method for the constrained model, since its posterior concentrates on the boundary of the parameter space.
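The basic identity underlying this family of estimators is that Z = L(θ)π(θ)/p(θ|y) at any point θ, with the posterior density p(θ|y) estimated from the chain. As a sketch I use a simple KDE stand-in for the density estimate on a conjugate toy model (this is not the Heavens et al. nearest-neighbour machinery itself, just the same identity with a cruder density estimator); note that a KDE evaluated at the mode of a well-behaved unimodal posterior is exactly the kind of thing that degrades when the mass piles up against a boundary.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

# Toy conjugate model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1), so the
# posterior is N(mu_n, s_n^2) in closed form and the true evidence is known
# via Z = L(theta) * pi(theta) / p(theta | y) at any theta.
rng = np.random.default_rng(1)
y = rng.normal(0.7, 1.0, size=50)
n = y.size

s2 = 1.0 / (n + 1.0)          # posterior variance
mu = y.sum() * s2             # posterior mean
draws = rng.normal(mu, np.sqrt(s2), size=20000)   # stand-in for an MCMC chain

def log_like(theta):
    return norm.logpdf(y, theta, 1.0).sum()

theta0 = mu  # evaluate the identity at the posterior mean
logZ_hat = (log_like(theta0) + norm.logpdf(theta0, 0.0, 1.0)
            - np.log(gaussian_kde(draws)(theta0)[0]))      # KDE density estimate
logZ_true = (log_like(theta0) + norm.logpdf(theta0, 0.0, 1.0)
             - norm.logpdf(theta0, mu, np.sqrt(s2)))       # exact density

print(logZ_hat, logZ_true)   # close here, as the posterior is well-behaved
```

The agreement is good precisely because this posterior is low dimensional, unimodal, and far from any boundary—exactly the conditions under which I’d trust a sample-based evidence estimate, and the conditions the constrained model violates.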