I noticed this brief note on the arXiv today by Schwetz et al. highlighting some of those authors’ concerns with a recent arXival by Simpson et al. regarding Bayesian evidence for the normal neutrino hierarchy. The question examined by Simpson et al.—and in a recent paper by Hannestad & Schwetz—is to what extent the available data from neutrino oscillation experiments and cosmological modelling should lead us to favour the normal hierarchy (NH) over the inverted hierarchy (IH) of neutrino masses: the former being m1 < m2 < m3 and the latter m3 < m1 < m2. The available neutrino oscillation experiments place constraints on the squared differences between two of the three possible mass pairings, these constraints being slightly different for the NH and IH models. The cosmological data, on the other hand, place an upper bound on the sum of masses (roughly 0.13 eV) which is close to the lower limit on the sum of masses under the IH scenario of roughly 0.1 eV. Hence, it is felt that future cosmological experiments may well rule out the IH scenario; notwithstanding which, the authors’ present interest is in precisely how likely we should consider the IH scenario to be for the time being.
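To make those scales concrete, here is a quick back-of-the-envelope sketch of where the ≈0.1 eV lower limit comes from, assuming representative splittings of Δm²_21 ≈ 7.5×10⁻⁵ eV² and an atmospheric splitting of ≈ 2.5×10⁻³ eV² (the precise fitted values differ slightly between the two hierarchies):

```python
import math

# Representative oscillation-measured splittings (assumed round numbers, eV^2):
DM2_SOLAR = 7.5e-5   # solar splitting, m2^2 - m1^2
DM2_ATM = 2.5e-3     # atmospheric splitting (magnitude)

def min_sum_nh():
    """Minimal sum of masses under NH (m1 < m2 < m3): set m1 = 0."""
    m1 = 0.0
    m2 = math.sqrt(DM2_SOLAR)
    m3 = math.sqrt(DM2_ATM)
    return m1 + m2 + m3

def min_sum_ih():
    """Minimal sum of masses under IH (m3 < m1 < m2): set m3 = 0,
    so both m1 and m2 sit near the atmospheric scale."""
    m3 = 0.0
    m2 = math.sqrt(DM2_ATM)
    m1 = math.sqrt(DM2_ATM - DM2_SOLAR)
    return m1 + m2 + m3
```

Evaluating these gives roughly 0.06 eV for the NH and 0.1 eV for the IH, which is why a cosmological upper bound creeping down towards 0.1 eV bears on the IH far sooner than on the NH.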
The approach taken by Simpson et al. is to (1) set equal a priori odds on the NH and IH scenarios, (2) specify a generative model for the neutrino masses under each scenario, and then (3) form posterior Bayes odds using the marginal likelihoods with respect to the observed data from both the neutrino oscillation and cosmology experiments together. Their models are given hierarchical priors in which (schematically) the logarithms of the three masses are drawn i.i.d. from a Normal(mu, sigma) distribution, with order statistics assigned to the mass states and hyperpriors placed on mu and sigma (reading numerical values off their figure labels).
Their prior for the IH scenario is the same except for the assignment of order statistics. Because the sample space is small and likelihood evaluations are cheap, they are able to compute posterior Bayes factors and the marginal posteriors in the hyperparameters via brute force (the arithmetic-mean estimator in the mass space over a grid of hyperparameters).
All this seems pretty straightforward, and indeed when I skimmed this paper (the Simpson et al. one) when it first appeared on the arXiv I didn’t think much of it beyond, “seems sensible, not relevant for blogging because the marginal likelihood was so easy to compute”. If I understand Schwetz et al.’s note correctly, their complaint translates (in my interpretation) to be that Simpson et al. set their prior odds on the models before considering the neutrino oscillation data. By contrast, in their earlier (but still quite recent) paper, Hannestad & Schwetz began from the neutrino oscillation experiment likelihoods, taking the posteriors obtained by updating non-informative priors with those constraints as their priors for examining the cosmological data, with equal prior odds on each scenario assigned after the neutrino oscillation experiments. By the nature of their analysis, Simpson et al. must go further, specifying a prior on the neutrino masses before observing any data. Schwetz et al.’s objection is that this introduces an artificial weighting of the evidence from the neutrino oscillation experiments.
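The bookkeeping difference between the two papers can be illustrated with purely made-up Bayes factors (none of these numbers come from either paper):

```python
# Hypothetical NH/IH Bayes factors, for illustration only.
prior_odds = 1.0   # equal a priori odds on NH vs IH
bf_osc = 2.0       # Bayes factor from the oscillation data (made up)
bf_cosmo = 10.0    # Bayes factor from the cosmological data (made up)

# Simpson et al.: prior odds are set before *any* data, so the
# oscillation data contribute their own Bayes factor to the odds.
odds_simpson = prior_odds * bf_osc * bf_cosmo   # 20.0

# Hannestad & Schwetz: equal odds are (re)assigned *after* the
# oscillation experiments, so only the cosmological data update them.
odds_hs = 1.0 * bf_cosmo                        # 10.0
```

Whenever the oscillation data themselves discriminate between the hierarchies under the assumed mass priors (bf_osc ≠ 1), the two conventions give different posterior odds, which is the crux of the disagreement.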
For sure the Simpson et al. approach will be sensitive to the priors adopted (as are all Bayesian model selection problems), but it doesn’t seem to me a silly thing to do in the context of forming betting odds on the hypotheses from the totality of available data. Further prior-sensitivity analyses might be warranted, and the nuances of the analysis could be more clearly explained, but I personally would conclude that the Simpson et al. analysis is a worthwhile contribution to the debate.