Last week a postdoc in my lab received a rejection letter from a high-profile statistics journal, the stated reason being that the problem was already solved in existing software such as INLA. Which was odd, because we use INLA all the time at work, and the whole reason we embarked on the project described in the rejected manuscript was that INLA does not offer a solution for this particular problem. My suspicion is that the editor or associate editor did a quick Google search on the topic and found a paper with an unnecessarily general title: that is, a paper whose title suggests that a general problem is solved therein, rather than the very restricted problem that is actually examined. (In this case the problem is the combination of areal and point data, which is trivially solved in INLA under a Normal likelihood with linear link function, but is not solved in INLA for non-Normal likelihoods with non-linear link functions.)
For this reason I would say that I’m more than a little skeptical about the clickbait motivation for the title given to this recent arXival: “Uncertainty Quantification with Generative Models”. A title sufficiently broad as to encompass the entirety of Bayesian inference and most of machine learning!! And one under which you would probably expect to find something more substantial than a proposal to approximate the posteriors of VAE-style models via a mixture of Gaussians, obtained by local mode finding (optimisation from random starting points) followed by computation of the Hessian at each of those modes. But apparently this novel idea has been accepted to the Bayesian Deep Learning workshop at NeurIPS this year, so what do I know?!
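For what it’s worth, the recipe as described above (multi-start optimisation to find local modes, then a Laplace step at each mode to get component covariances) is simple enough to sketch in a few lines. Here it is on a toy bimodal target; the target density, thresholds, and all names below are purely illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_post(x):
    """Toy unnormalised negative log-density: mixture of N(-2, 0.5^2) and N(2, 0.5^2)."""
    p = 0.5 * np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2) + \
        0.5 * np.exp(-0.5 * ((x + 2.0) / 0.5) ** 2)
    return -np.log(p)

# Step 1: local mode finding via optimisation from random starting points,
# deduplicating starts that converge to the same mode.
rng = np.random.default_rng(0)
modes = []
for start in rng.uniform(-5.0, 5.0, size=20):
    res = minimize(lambda v: neg_log_post(v[0]), x0=[start])
    m = res.x[0]
    if not any(abs(m - existing) < 1e-3 for existing in modes):
        modes.append(m)

# Step 2: Laplace step -- component variance from the (numerical) second
# derivative of the negative log-density at each mode.
def second_deriv(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

components = [(m, 1.0 / second_deriv(neg_log_post, m)) for m in modes]

# Mixture weights proportional to each component's Laplace evidence:
# density at the mode times sqrt(2*pi*variance).
unnorm = np.array([np.exp(-neg_log_post(m)) * np.sqrt(2.0 * np.pi * v)
                   for m, v in components])
weights = unnorm / unnorm.sum()
```

On this toy target the sketch recovers the two modes near ±2, component variances near 0.25, and roughly equal weights; in more than one dimension the second-derivative step becomes a full Hessian (and its inverse the component covariance), which is where the real cost lies.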
If I’m going to start beef with the machine learning community then I may as well say something else on the topic. Recently it came to light that an Australian engineering professor was fired from Swinburne University for having published a huge amount of duplicate work: i.e., submitting essentially the same paper to multiple journals in order to spin each actual project out into multiple near-identical publications. The alleged motivation was the pressure to juke one’s own research output stats (total publications and total citations). Which is funny, because I don’t know of many machine learning professors who don’t have the same issue with their publications: multiple versions of the same paper given at NeurIPS, AISTATS, ICLR, etc., and then maybe submitted to a statistics journal as well!
And in other indignities, the third author on this arXival is on a salary that is over 2.5 times my Oxford salary.