An interesting arXival from last week describes a “Bayesian neural flow” (i.e., masked auto-encoder) model as a data-adaptive prior for estimating the de-noised (latent) distribution of Gaia catalogue stars in colour-magnitude space. The advantage of neural flow models in this context (density estimation) is their ability to produce a highly flexible distribution model with a readily computable normalisation (via the Jacobian), which is otherwise a problem for more common semi-parametric distribution models (e.g. log Gaussian Cox process models). The authors demonstrate the potential use of this model as a prior (what I would call a ‘highly flexible Bayesian shrinkage prior’) for refining Gaia distance estimates.

The learned colour-magnitude diagram is impressive, although, as the authors illustrate with a visualisation of this distribution on a log-density colour scale, there is some weird-looking filamentary residual structure where the data are sparse. It seems to me that an outstanding problem for this type of model is how best to introduce an additional penalty towards smoothness in the projected (colour-magnitude diagram) space of the neural flows to damp down such filaments. Of course, one can easily add an arbitrary smoothness penalty to the likelihood function, but ideally what we need is a penalty that’s reasonably data-adaptive. Inside the Bayesian paradigm the nearest methods circle back to Gaussian processes, while outside it the nearest comparable methods are those for choosing the penalty on spline fits or kernel smoothing.
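To make the “readily computable normalisation via the Jacobian” point concrete, here is a minimal one-dimensional sketch (not the paper’s model; the map `f` and its parameters are toy choices of mine): an invertible map pushes data to a standard normal base distribution, the model density picks up a log-Jacobian term, and normalisation holds by construction. The final lines illustrate, purely schematically, the kind of ad hoc smoothness penalty discussed above, here a squared second-difference penalty on the log-density over a grid.

```python
import numpy as np

# A simple monotone invertible "flow" layer: affine plus a tanh warp.
# With a > 0 and b >= 0 the derivative is strictly positive, so f is invertible.
def f(x, a=1.5, b=0.3):
    return a * x + b * np.tanh(x)

def f_prime(x, a=1.5, b=0.3):
    # Jacobian (derivative) of the map, needed for the change of variables.
    return a + b / np.cosh(x) ** 2

def log_density(x):
    # Change of variables: log p(x) = log N(f(x); 0, 1) + log |f'(x)|.
    z = f(x)
    log_base = -0.5 * z ** 2 - 0.5 * np.log(2.0 * np.pi)
    return log_base + np.log(f_prime(x))

# The Jacobian term keeps the density normalised: check by quadrature.
xs = np.linspace(-10.0, 10.0, 20001)
total = np.trapz(np.exp(log_density(xs)), xs)  # should be very close to 1

# A crude (not data-adaptive) smoothness penalty one could add to the
# likelihood: squared second differences of the log-density on a grid.
grid = np.linspace(-5.0, 5.0, 201)
penalty = np.sum(np.diff(log_density(grid), n=2) ** 2)
```

The penalty here has a fixed weight and grid, which is exactly the shortcoming noted in the post: choosing that weight adaptively is where Gaussian processes (Bayesian) or spline/kernel penalty-selection methods (non-Bayesian) come in.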
