The hyper-parameter method for unification of cosmological experiments …

I noticed a paper by Johnson et al. on astro-ph today presenting cosmological constraints from the 6dF galaxy velocity survey.  What caught my eye was their discussion of the “hyper-parameter method” for combined inference from a collection of individual datasets that “may contain systematic errors, requiring them to be re-weighted in the likelihood analysis”.  The so-called “hyper-parameter method”, introduced by Lahav et al. (2000) and Hobson et al. (2002), proposes to solve this problem by treating the scale of the random errors in each experiment as a free (hyper-)parameter, assigned a Jeffreys prior and marginalised out in the final analysis.  My problem with this idea is that it involves a highly non-standard definition of systematic error, which would more conventionally be defined as a measurement bias that cannot, unlike the random error, be reduced to zero simply by averaging over more and more data.  While we might (given due cause) still introduce a hyper-parameter to model uncertainty in the scale of our random error estimates, a far more principled approach to the problem of reconciling surveys under possible systematic errors is to follow the mixed effects model approach popular in clinical meta-analysis studies (for which a contemporaneous equivalent to the Lahav et al. reference would be that of Sutton & Abrams 2001).  Oh, and I also have a discussion of these ideas in my fine structure II manuscript.
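To make the contrast concrete, here is a minimal toy sketch (with invented numbers, not anything from the Johnson et al. paper) of the two approaches applied to a pair of mock surveys measuring the same quantity, where the second survey carries a genuine systematic offset.  The hyper-parameter marginalisation follows the Hobson et al. (2002) result that a Jeffreys prior on each error scale yields a log-likelihood contribution of −(n_k/2) log χ²_k; the random-effects combination uses the standard DerSimonian–Laird estimator from the meta-analysis literature.

```python
import numpy as np

# Two mock datasets measuring the same quantity theta; dataset B carries
# a +1.0 systematic offset that averaging over more data would not remove.
# (All numbers here are invented for illustration.)
y_A = np.array([9.8, 10.1, 10.0, 9.9, 10.2])
y_B = np.array([10.9, 11.1, 11.0, 10.8, 11.2])
sigma = 0.2  # quoted per-point random error, same in both surveys

theta = np.linspace(9.0, 12.0, 3001)  # parameter grid

def chi2(y, theta):
    """Chi-square of dataset y against each trial value of theta."""
    return ((y[:, None] - theta[None, :]) ** 2).sum(axis=0) / sigma**2

chi2_A, chi2_B = chi2(y_A, theta), chi2(y_B, theta)
n_A, n_B = len(y_A), len(y_B)

# 1) Naive joint Gaussian likelihood: simply add the chi-squares.
logL_naive = -0.5 * (chi2_A + chi2_B)

# 2) Hyper-parameter method (Lahav et al. 2000; Hobson et al. 2002):
# assign each dataset's error scale a Jeffreys prior and marginalise it
# out, giving logL_k = -(n_k / 2) * log(chi2_k) up to constants.
logL_hyper = -(n_A / 2) * np.log(chi2_A) - (n_B / 2) * np.log(chi2_B)

naive_map = theta[np.argmax(logL_naive)]
hyper_map = theta[np.argmax(logL_hyper)]

# 3) Random-effects meta-analysis (DerSimonian-Laird): each survey's
# true mean is modelled as drawn from N(theta, tau^2), so the estimated
# between-survey scatter tau^2 inflates the combined uncertainty.
means = np.array([y_A.mean(), y_B.mean()])
var = np.array([sigma**2 / n_A, sigma**2 / n_B])  # squared s.e. of each mean
w = 1.0 / var
fixed = (w * means).sum() / w.sum()               # fixed-effect pooled mean
Q = (w * (means - fixed) ** 2).sum()              # heterogeneity statistic
tau2 = max(0.0, (Q - (len(means) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
w_star = 1.0 / (var + tau2)                       # random-effects weights
re_mean = (w_star * means).sum() / w_star.sum()
fixed_se = w.sum() ** -0.5
re_se = w_star.sum() ** -0.5
```

In this toy setup the naive combination lands confidently midway between the two surveys, the hyper-parameter posterior instead down-weights one dataset and piles up near one of the two discrepant means, and the random-effects combination keeps the central compromise but honestly widens its error bar to reflect the inferred between-survey systematic.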

This entry was posted in Astrostatistics, Statistics. Bookmark the permalink.
