A figure prepared for the revision of our paper on recursive marginal likelihood estimators. The likelihood function (if measurable, of course) induces a measure on the (positive) reals, as do other measurable transformations involving the likelihood, such as the prior mass cumulant w.r.t. the likelihood used by nested sampling, and the negative log-likelihood used by the (physics-inspired) density of states. All but the prior mass cumulant allow for recursive marginal likelihood estimation via Vardi’s method, even when the prior measure is not absolutely continuous with respect to Lebesgue measure (i.e., does not admit a probability density).
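As a toy illustration (not the recursive estimator from the paper itself): for samples drawn from several unnormalised densities, Vardi-style NPMLE of the normalising constants reduces to a simple self-consistent fixed-point iteration. The two Gaussian-shaped integrands below are my own invented example; only the ratio of constants is identifiable, which is what the sketch recovers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two unnormalised densities with known normalising constants (toy example):
# q1(x) = exp(-x^2/2) -> Z1 = sqrt(2*pi);  q2(x) = exp(-x^2/8) -> Z2 = 2*Z1
q = [lambda x: np.exp(-x**2 / 2.0), lambda x: np.exp(-x**2 / 8.0)]

n = 20_000                                    # draws per distribution
x = np.concatenate([rng.normal(0.0, 1.0, n),  # draws from q1/Z1
                    rng.normal(0.0, 2.0, n)]) # draws from q2/Z2
Q = np.stack([qk(x) for qk in q])             # q_k evaluated on the pooled draws

Z = np.ones(2)                                # initial guesses
for _ in range(500):
    denom = (n / Z[:, None] * Q).sum(axis=0)  # sum_j n_j q_j(x_i) / Z_j
    Z = (Q / denom).sum(axis=1)               # self-consistency update

print(Z[1] / Z[0])                            # should be close to 2
```

The update is the standard self-consistent (MLE) scheme for normalising constants from pooled biased samples; the absolute scale of Z is arbitrary, but the ratio converges.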
An interesting sidebar: if we could sample X’s from L^-1(X)*dX, then we could apply Vardi’s method directly in this space as well (in reality we can’t; estimating the X’s is the “trick” of nested sampling). But if we could, would Grenander’s NPMLE for non-increasing density functions somehow be applicable here too? And if so, could it do better than Vardi’s method? [Perhaps these two have been compared in the literature before?]