More dumping on the NeurIPS Bayesian Deep Learning Workshop …

Today I noticed another paper on astro-ph that irked me, and again it turns out to be accepted at this year’s NeurIPS Bayesian Deep Learning Workshop.  This particular arXival proposes a Bayesian approach to the construction of super-resolution images, in particular to explore uncertainty quantification, since “in many scientific domains this is not adequate and estimations of errors and uncertainties are crucial”.  What irked me?  One thing was the statement: “to the extent of our
knowledge, there is no existing work measuring uncertainty in super-resolution tasks”.  That might be true if you consider only a particular class of machine learning algorithms that have addressed the challenge of creating high-resolution images from low-resolution inputs, but this general problem (PSF deconvolution, drizzling, etc.) has been a core topic in astronomical imaging since the first CCDs, and in this context there are many studies of accuracy and uncertainty.  Likewise, the general problem of how to build confidence in statistical reconstructions of images without a ground truth to validate against is also well explored in astronomy.  The first-ever black hole image (‘the Katie Bouman news story’) addressed this challenge through a structured comparison of images separately created by four independent teams using different methods.

Another thing that irks me is that I find the proposed breakdown between types of uncertainty (“Epistemic uncertainty relates to our ignorance of the true data generating process, and aleatoric uncertainty captures the inherent noise in the data.”) to be inadequate.  Here this really just proposes a separation between the prior and the likelihood, which runs against the useful maxim of applied Bayesian modelling that the prior can only be understood in the context of the likelihood.  That said, I also wouldn’t call this a Bayesian method, since the approximation of the posterior implied by dropout is zeroth-order at best.  Don’t get me wrong: dropout is a great technique for certain applications, but I find the arguments suggesting it has a Bayesian flavour rather unconvincing, though evidently attractive for citations.
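To make concrete what this dropout-based uncertainty recipe amounts to (the “MC dropout” idea usually credited to Gal & Ghahramani): keep dropout switched on at prediction time and treat the spread of repeated stochastic forward passes as an epistemic-uncertainty proxy.  A minimal sketch with a toy, randomly weighted network; the weights and all names here are hypothetical illustrations, not taken from the paper under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network with fixed, randomly drawn weights
# (purely illustrative; not trained on anything).
W1 = rng.normal(size=(1, 50))
W2 = rng.normal(size=(50, 1)) / np.sqrt(50)

def forward(x, p_drop=0.5, mc_dropout=True):
    """One forward pass; with mc_dropout=True, dropout stays ON at test time."""
    h = np.tanh(x @ W1)
    if mc_dropout:
        mask = rng.random(h.shape) > p_drop
        h = h * mask / (1.0 - p_drop)  # inverted-dropout rescaling
    return h @ W2

x = np.array([[0.3]])
# Repeated stochastic passes; their spread is the claimed uncertainty.
samples = np.stack([forward(x) for _ in range(1000)])
mean, std = samples.mean(), samples.std()
print(f"predictive mean {mean:.3f} +/- {std:.3f}")
```

Note that the resulting spread is entirely an artefact of the dropout rate and the network architecture, which is precisely why I hesitate to read it as a posterior: nothing in the recipe ties the variability of those forward passes to a likelihood or a prior anyone would defend.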
