Fractional Poisson process … is there an astronomical application?

We had an interesting talk at QUT last week from N. Leonenko (Cardiff) on the fractional Poisson process (FPP), a generalization of the familiar Poisson process (PP) exhibiting long-range dependence (cf. Laskin 2003).  The name suggests an intended analogy with the fractional Wiener process (FWP), and its long-range dependence means it no longer shares the characteristic “memoryless” property of the PP.  When its domain is, say, R^2 (Leonenko & Merzbach) with some underlying reference intensity field (perhaps the realization of a Gaussian process), a typical realization of the FPP should have points clustered around the peaks of that intensity field, but with an ‘extra degree of clustering’.

As such, it seems to me that the GP + FPP combination could be a handy modelling device for describing non-independent data sampling in a geostatistical context.  But what about astronomy? Could there be a sensible motivation for applying this model to mapping the large-scale mass distribution with galaxies as tracers? Such analyses already use the PP (e.g. Kitaura et al.; Nadathur), but could they use the FPP instead, to take into account the role of galaxy clustering within halos (does that even make sense)?

It is important to note that the FPP does not seem easy to fit (I have the impression that its likelihood might be intractable), though it is easy to simulate from.  (Sounds like an ABC challenge to me!)
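
Since simulation really is the easy part, here is a minimal Python sketch, assuming the renewal construction of the FPP (i.i.d. Mittag-Leffler inter-arrival times) and one standard inverse-transform recipe for generating them; the function names and default parameters are mine, purely for illustration.

```python
import numpy as np

def mittag_leffler_times(n, alpha, gamma=1.0, rng=None):
    """Draw n Mittag-Leffler(alpha) waiting times by inverse transform:
    tau = -gamma * ln(u) * [sin(a*pi)/tan(a*pi*v) - cos(a*pi)]**(1/a)
    with u, v ~ Uniform(0,1); alpha = 1 recovers the exponential
    waiting times of the ordinary Poisson process."""
    rng = np.random.default_rng(rng)
    u, v = rng.uniform(size=(2, n))
    if alpha == 1.0:
        return -gamma * np.log(u)
    bracket = (np.sin(alpha * np.pi) / np.tan(alpha * np.pi * v)
               - np.cos(alpha * np.pi))
    return -gamma * np.log(u) * bracket ** (1.0 / alpha)

def simulate_fpp(t_max, alpha, gamma=1.0, rng=None, block=1024):
    """Arrival times of the (renewal) fractional Poisson process on
    [0, t_max]: cumulative sums of Mittag-Leffler waiting times."""
    rng = np.random.default_rng(rng)
    times = np.empty(0)
    t_end = 0.0
    while t_end <= t_max:
        gaps = mittag_leffler_times(block, alpha, gamma, rng)
        times = np.concatenate([times, t_end + np.cumsum(gaps)])
        t_end = times[-1]
    return times[times <= t_max]

# alpha < 1 gives heavy-tailed gaps: bursts of points separated by
# long empty stretches, i.e. the 'extra degree of clustering'.
arrivals = simulate_fpp(t_max=100.0, alpha=0.7, rng=1)
print(arrivals.size, arrivals[:5])
```

A forward simulator like this is exactly the ingredient an ABC rejection scheme would need: propose (alpha, gamma), simulate, compare summary statistics (e.g. counts-in-cells moments) against the data, and keep the proposals that land close.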

Update: Two additional questions.  What would be the advantages of using the FPP in continuous space over the generalized Poisson distribution for counts in discrete cells (Sheth 1998)? And what is the practical difference between the FPP and a marked PP model (Reddick et al. 2013)?
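
To make the comparison concrete, here is a minimal sketch of the discrete-cell alternative, assuming Consul's parameterization of the generalized Poisson distribution (which I believe is the form used by Sheth 1998); the helper names and example values are mine.

```python
import numpy as np
from scipy.special import gammaln

def gpd_logpmf(n, mu, lam):
    """Log-pmf of the generalized Poisson distribution,
    P(N = n) = mu * (mu + n*lam)**(n-1) * exp(-mu - n*lam) / n!,
    for 0 <= lam < 1 (lam = 0 recovers the ordinary Poisson)."""
    n = np.asarray(n, dtype=float)
    return (np.log(mu) + (n - 1.0) * np.log(mu + n * lam)
            - mu - n * lam - gammaln(n + 1.0))

def gpd_sample(mu, lam, rng=None):
    """Sample via the branching-process representation: Poisson(mu)
    ancestors, each individual independently bearing Poisson(lam)
    offspring; the total progeny is generalized-Poisson distributed."""
    rng = np.random.default_rng(rng)
    total, current = 0, rng.poisson(mu)
    while current > 0:
        total += current
        current = rng.poisson(lam * current)
    return total

# Counts in one cell, mean mu/(1-lam); lam controls over-dispersion.
rng = np.random.default_rng(0)
counts = [gpd_sample(mu=2.0, lam=0.4, rng=rng) for _ in range(5)]
print(counts, np.exp(gpd_logpmf([0, 1, 2, 3], mu=2.0, lam=0.4)))
```

The single over-dispersion parameter lam is what the FPP's fractional order would be competing against: the question is whether continuous-space, long-range-dependent clustering buys anything beyond this.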

4 Responses to Fractional Poisson process … is there an astronomical application?

  1. “It is important to note that the FPP does not seem easy to fit (I have the impression that its likelihood might be intractable), though it is easy to simulate from. (Sounds like an ABC challenge to me!)”

    It might make a good prior. Being easy to sample from is important there.

  2. None of those papers has a figure showing the kinds of hypotheses that are typical under the distribution. That’s a key thing I’d want to see in deciding whether the proposed distribution is a good model for prior beliefs about anything.

    • True, it’s so far only been developed within the theoretical probability context, so it’s quite a gamble to invest time in attempting to find a practical use for it (with one possible outcome being that it’s mathematically intractable, or that in practice it doesn’t improve on Ravi Sheth’s 1998 paper with a generalised Poisson count in discrete cubes). On the other hand, it could lead to an Annals of Applied Statistics or a JRSS B paper, and how much does Figure 2 here look like the large-scale structure from simulations: http://hal.archives-ouvertes.fr/docs/00/38/25/70/PDF/fPoissonf.pdf 🙂

      BTW I noticed yesterday that if you put the ratio of prior densities, pi_alternative(theta_i)/pi_original(theta_i), as f(theta_i) into the sum_i f(theta_i) L_i w_i posterior approximation to Eqn 35 in my (w/ Feroz & Hobson) INS draft then you can do prior-sensitivity analysis from nested sampling in a similar way to what I suggest for biased sampling. (A minimal code sketch of this reweighting appears at the end of the thread.)

  3. Yeah, those pics look cool. Reminds me of microlensing magnification maps too.

    I think Radford Neal’s Dirichlet Diffusion Trees look like dark matter haloes in LCDM, but I’ve never gone anywhere with that idea.

    “BTW I noticed yesterday that if you put the ratio of prior densities, pi_alternative(theta_i)/pi_original(theta_i), as f(theta_i) into the sum_i f(theta_i) L_i w_i posterior approximation to Eqn 35 in my (w/ Feroz & Hobson) INS draft then you can do prior-sensitivity analysis from nested sampling in a similar way to what I suggest for biased sampling.”

    Cool. Nested Sampling is also very good for doing likelihood sensitivity checks. NS wrt a particular likelihood function also gets you near the peak of a lot of alternative likelihood functions. That’s one of the best things about it.
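
To close the loop on the reweighting idea quoted above, here is a rough sketch, assuming per-sample arrays of log-likelihoods and log-weights from a finished nested-sampling run; the function names and array layout are illustrative, not taken from the INS draft.

```python
import numpy as np

def log_evidence_reweighted(log_L, log_w, log_prior_alt, log_prior_orig):
    """Evidence under an alternative prior from existing NS output:
    insert f(theta_i) = pi_alternative(theta_i)/pi_original(theta_i)
    into the sum_i f(theta_i) L_i w_i estimator, in log space."""
    log_terms = log_L + log_w + log_prior_alt - log_prior_orig
    m = log_terms.max()
    return m + np.log(np.sum(np.exp(log_terms - m)))

def posterior_weights_reweighted(log_L, log_w, log_prior_alt, log_prior_orig):
    """Normalized posterior weights for the same samples under the
    alternative prior, for reweighted posterior expectations."""
    log_terms = log_L + log_w + log_prior_alt - log_prior_orig
    p = np.exp(log_terms - log_terms.max())
    return p / p.sum()
```

Working in log space keeps the estimator stable when the prior ratio varies over many orders of magnitude across the sample set.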
