Exoplanetary atmosphere retrieval: still not distinguishing between generalities and specifics

I noticed a recent arXival by the group using random forests to approximate Bayesian posteriors from pre-computed model grids of exoplanetary atmosphere (or, this time, brown dwarf) spectra.  Once again I feel that there remains a fundamental misunderstanding of the problem at hand, one which artificially frames the choice as random forests being the default method for pre-computed model grids versus MCMC/NS (NS = nested sampling) being the default when spectra are computed on the fly (and, therefore, necessarily simplified for computational convenience).  As a result, most of the advantages ascribed to random forests for parameter retrieval are actually just differences between inference with pre-computed model grids and inference with spectra computed on the fly.  And now there is a 'lock-in' effect whereby strategies that may well be superior to random forests for handling retrieval with pre-computed grids are completely ignored.

The biggest limitation of random forests as a method for parameter recovery from model grids is that, in order to handle the impact of observational noise, they need to be run in an ABC-style mode (in which mock observations are generated by drawing from the noise model and adding those draws to the model grid spectra).  This introduces an avoidable O(1) approximation error relative to simple importance weighting of the model grids, which is available whenever the likelihood function is of an ordinary tractable kind (sketched in the first code example below).  Another limitation is that the random forest requires a large training library, whereas often the expensive models can only be used to produce a sparse grid; in the above arXival the hack used is to fill out the grid by linear interpolation.  Existing solutions for this type of problem fall in the space of Gaussian process-based model emulators or posterior approximators, which are designed to represent the uncertainty of the interpolation itself (see the second code example below).  These techniques then open naturally onto strategies for planning new batches of expensive model evaluations (Bayesian optimisation) and/or the mixed use of cheap-but-approximate and expensive simulators (multi-fidelity Bayesian optimisation) or mixed posterior approximators (transfer learning).
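To make the importance-weighting point concrete, here is a minimal sketch in Python/numpy, assuming independent Gaussian observational noise with known per-bin standard deviations; the grid dimensions, parameter labels, and stand-in spectra are hypothetical placeholders, not anything from the paper under discussion.

```python
# A minimal sketch of likelihood-based importance weighting over a pre-computed
# model grid, assuming independent Gaussian noise (hypothetical toy setup).
import numpy as np

rng = np.random.default_rng(0)

n_models, n_bins = 5000, 100                         # 5000 model spectra, 100 wavelength bins
grid_params = rng.uniform(size=(n_models, 2))        # e.g. (temperature, log-gravity), rescaled
grid_spectra = rng.normal(size=(n_models, n_bins))   # stand-in for the pre-computed spectra

sigma = 0.1 * np.ones(n_bins)                        # known per-bin noise standard deviations
observed = grid_spectra[42] + rng.normal(scale=0.1, size=n_bins)  # mock observation

# Gaussian log-likelihood of the observation under each grid model: the tractable
# likelihood is evaluated exactly, with no ABC-style mock-noise injection required.
log_like = -0.5 * np.sum(((observed - grid_spectra) / sigma) ** 2, axis=1)

# Importance weights (a uniform prior over grid points is assumed here);
# subtract the max log-likelihood before exponentiating for numerical stability.
weights = np.exp(log_like - log_like.max())
weights /= weights.sum()

# Weighted posterior summaries for each parameter.
post_mean = weights @ grid_params
post_var = weights @ (grid_params - post_mean) ** 2
print("posterior mean:", post_mean, "posterior sd:", np.sqrt(post_var))
```

With a tractable likelihood the only approximation remaining is the finite resolution of the grid itself, which is exactly the error the ABC-style noise injection adds on top of.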
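And for the sparse-grid problem, a minimal sketch of a Gaussian process emulator, again with hypothetical stand-ins (a 1D parameter and a toy 'expensive model') purely for readability.  Unlike linear interpolation, the GP reports its own interpolation uncertainty, and the final lines show the most naive possible acquisition rule for choosing the next expensive model run.

```python
# A minimal sketch of a GP emulator trained on a sparse model grid; the kernel
# choice and the toy 'expensive model' are illustrative assumptions only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(theta):
    # Stand-in for an expensive radiative-transfer run returning one summary statistic.
    return np.sin(3.0 * theta) + 0.5 * theta

theta_grid = np.linspace(0.0, 2.0, 8)[:, None]   # sparse grid: only 8 expensive evaluations
y_grid = expensive_model(theta_grid).ravel()

gp = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=0.5),
    normalize_y=True,
)
gp.fit(theta_grid, y_grid)

# The GP returns both a mean prediction and a predictive standard deviation,
# i.e. it quantifies the uncertainty that linear interpolation silently discards.
theta_new = np.linspace(0.0, 2.0, 200)[:, None]
mean, sd = gp.predict(theta_new, return_std=True)

# Pure-exploration acquisition: propose the next expensive run where the
# emulator is least certain (the simplest Bayesian-optimisation step).
next_run = theta_new[np.argmax(sd)]
print("propose next expensive model evaluation at theta =", next_run.ravel())
```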
