Dataset splitting to probe misspecification …

A recent arXival, motivated by CMB experiment posteriors, presents a heuristic for understanding how much adding new data should be expected to reshape the posterior of a well-specified model (under the ‘nice’ scenario in which the experimental data overwhelm the influence of the prior and even the subsetted posterior is close to Normal).  The flip-side of this analysis is the hope of thereby being able to “make judgements about the internal coherency of the data and the appropriateness of a model for describing those data”: i.e., a sanity check on whether or not we’re in the misspecified setting.  This seems to me to be (to some extent) drifting towards the contemporary trend for data subsetting and re-sampling (read: bootstrapping) as a means both to ‘robustify’ Bayesian analyses against misspecification and to test for it.  See, esp., Lyddon et al. (2019) and references therein.  In a recent talk (on an upcoming paper), Jonathan Huggins described a testing procedure for misspecification based on Bayesian bagging in which the covariance matrices of the bootstrapped and original posteriors are compared; this in particular seemed relevant to the ideas explored in the arXival above.
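For intuition, here is a minimal sketch of that kind of bagging-based check: my own toy version, not the actual procedure from Huggins’ talk or the arXival.  It uses a conjugate Normal model for an unknown mean, with the noise variance deliberately mis-set in the second case, and compares the spread of bootstrapped posterior means against the original posterior variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_moments(y, sigma2=1.0, tau2=100.0):
    """Conjugate posterior for an unknown Normal mean with known noise
    variance sigma2 and prior N(0, tau2): returns (mean, variance)."""
    var = 1.0 / (len(y) / sigma2 + 1.0 / tau2)
    return var * y.sum() / sigma2, var

def bagging_ratio(y, n_boot=400):
    """Spread of bootstrapped posterior means relative to the original
    posterior variance; roughly 1 when the model is well specified."""
    _, v0 = posterior_moments(y)
    boot_means = [posterior_moments(rng.choice(y, len(y), replace=True))[0]
                  for _ in range(n_boot)]
    return np.var(boot_means) / v0

y_ok = rng.normal(0.0, 1.0, 500)   # noise matches the assumed sigma2 = 1
y_bad = rng.normal(0.0, 2.0, 500)  # true sd is 2: likelihood misspecified

r_ok, r_bad = bagging_ratio(y_ok), bagging_ratio(y_bad)
print(f"well-specified ratio: {r_ok:.2f}  misspecified ratio: {r_bad:.2f}")
```

When the likelihood matches the data-generating process the ratio sits near one; under the mis-set noise variance the bootstrapped posterior means scatter far more widely (here roughly four-fold, tracking the variance mismatch) than the original posterior width would suggest.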

Having pointed out the similarities, I should note one big difference (beyond the obvious difference of partitioning vs bootstrap subsampling): the methods developed in the statistics literature all add one further criterion to the ‘niceness’ of the scenario, namely that the data are iid (independent and identically distributed).  This restriction is essential not only to ensure that bootstrapping is naturally defined, but also to ensure that the class of models being investigated has nice Bernstein–von Mises (BvM) results and fast convergence towards frequentist-style coverage.  Where I would be concerned about the current method is that apparent Normality of the posterior is insufficient to guarantee good coverage (though the reverse tends to follow).  For example, posteriors on the hyper-parameters of a random field covariance function may be near-Normal yet still have poor coverage in a way that is prior sensitive: both problems remaining to be checked, I suppose, before applying the above in practice.
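That coverage worry is easy to demonstrate by simulation in a toy setting.  The sketch below (again my own illustration, with all parameter choices arbitrary) has a posterior that is exactly Normal in both cases, yet badly under-covers once the assumed noise variance is wrong:

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage(sigma2_true, n_rep=2000, n=50, sigma2_model=1.0):
    """Long-run frequency with which the nominal 95% credible interval for
    the mean (computed under the assumed sigma2_model) contains the truth."""
    hits = 0
    for _ in range(n_rep):
        mu = rng.normal()                          # truth drawn from the N(0, 1) prior
        y = rng.normal(mu, np.sqrt(sigma2_true), n)
        post_var = 1.0 / (n / sigma2_model + 1.0)  # conjugate update, prior N(0, 1)
        post_mean = post_var * y.sum() / sigma2_model
        half_width = 1.96 * np.sqrt(post_var)
        hits += abs(post_mean - mu) <= half_width
    return hits / n_rep

c_ok = coverage(sigma2_true=1.0)   # well specified: close to 0.95
c_bad = coverage(sigma2_true=4.0)  # posterior still exactly Normal, coverage poor
print(f"coverage: {c_ok:.3f} (well specified) vs {c_bad:.3f} (misspecified)")
```

In the misspecified case the posterior is perfectly Gaussian in shape, so no Normality diagnostic would flag it: only a direct coverage (or sandwich-style variance) check reveals the interval is far too narrow.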
