A common criticism of malaria mapping work that I’ve been involved with runs to the effect that we don’t correct properly for the difference between rapid diagnostic test (RDT) and microscopy estimates of parasite prevalence. For instance, this paper points out that in Mappin et al (2015), where we examine the relationship between RDT and microscopy prevalence, the fitted model has “a major limitation” in that we don’t allow for over-dispersion from unmeasured risk factors, and proposes as a solution a model in which the RDT and microscopy prevalences each take their own geospatial random field as a residual error term. This is silly: we already acknowledge large contributions to over-dispersion from differences in diagnostic type, treatment history, and fever status, and it’s far from obvious that any further modelling of unexplained over-dispersion would meaningfully improve the fit at such a small sample size. Furthermore, when the authors of that paper come to demonstrate their preferred model on the dual malaria diagnostic problem (in that instance, microscopy vs PCR), they end up deciding that actually “estimating components of residual spatial variation that are unique to each diagnostic may be difficult”, and so resort to fitting a model without residual spatial variation about the diagnostic-to-diagnostic relationship.
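To make the modelling question concrete, here is a minimal sketch, in Python with simulated data, of the kind of logit-scale relationship between microscopy and RDT prevalence being discussed. The generative parameters, the noise structure, and the crude least-squares fit are all my own illustrative assumptions, not the Mappin et al model; the point is just to show where "over-dispersion" lives in such a fit, namely as survey-level residual variance beyond binomial sampling noise:

```python
import numpy as np

rng = np.random.default_rng(42)

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Simulate paired survey data: microscopy prevalence drives RDT prevalence
# through a linear relationship on the logit scale (hypothetical a, b),
# plus survey-level noise standing in for over-dispersion from unmeasured
# risk factors (diagnostic batch, treatment history, fever status, ...).
n_surveys = 200
n_tested = rng.integers(100, 500, size=n_surveys)
mic_prev = rng.uniform(0.05, 0.6, size=n_surveys)
a, b, sigma = 0.5, 1.1, 0.3  # hypothetical intercept, slope, noise sd
rdt_prev_true = inv_logit(a + b * logit(mic_prev)
                          + rng.normal(0, sigma, n_surveys))
rdt_pos = rng.binomial(n_tested, rdt_prev_true)
rdt_prev_obs = rdt_pos / n_tested

# Fit the logit-logit relationship by ordinary least squares on the
# empirical logits (crude, but enough to illustrate the structure).
X = np.column_stack([np.ones(n_surveys), logit(mic_prev)])
y = logit(np.clip(rdt_prev_obs, 1e-3, 1 - 1e-3))
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"fitted intercept={coef[0]:.2f}, slope={coef[1]:.2f}")

# Residual variance exceeds what binomial sampling alone would produce:
# that excess is the over-dispersion at issue.
resid_var = np.var(y - X @ coef)
print(f"residual variance on logit scale: {resid_var:.3f}")
```

Whether that residual variance deserves its own pair of geospatial random fields, as opposed to the simpler treatment above, is exactly the question of whether 200-odd surveys can support the extra structure.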
Another paper claims that our models “did not consider that malaria prevalence from national and sub-national household surveys can be influenced by the uncertainty in the accuracy of diagnostic methods used during the surveys” and suggests a method for estimating “true malaria prevalence”. However, as I have been asking in the comments of that paper for two years now, the authors offer no clear definition of “true prevalence” in their model. The point is that RDT and microscopic diagnosis measure two different things: the former detects an antigenic response associated with current or recently cleared malaria parasite presence in the blood, while the latter identifies contemporary parasitaemia only. Hence, we cannot treat an RDT-positive but microscopy-negative result as a failure of microscopic diagnosis.
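A toy simulation makes the point: if the antigenic response keeps an RDT positive for some time after parasites are cleared, discordant RDT-positive/microscopy-negative results appear even when both diagnostics perform perfectly. The three states and their probabilities below are illustrative, not empirical estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Each individual is in one of three states (probabilities illustrative):
#   'infected' -> current parasitaemia: microscopy+ and RDT+
#   'cleared'  -> recently cleared infection: microscopy-, but the
#                 lingering antigenic response still makes the RDT positive
#   'negative' -> no current or recent infection: both negative
state = rng.choice(["infected", "cleared", "negative"],
                   size=n, p=[0.20, 0.08, 0.72])

microscopy_pos = state == "infected"
rdt_pos = (state == "infected") | (state == "cleared")

mic_prev = microscopy_pos.mean()
rdt_prev = rdt_pos.mean()
discordant = (rdt_pos & ~microscopy_pos).mean()

print(f"microscopy prevalence: {mic_prev:.3f}")
print(f"RDT prevalence:        {rdt_prev:.3f}")
print(f"RDT+/microscopy-:      {discordant:.3f}")
# Neither diagnostic has 'failed' here: the gap is entirely the
# recently-cleared group, i.e. the two tests measure different things.
```

In this setup any model that labels the discordant fraction a microscopy error rate is estimating an artefact of its own definition of “true prevalence”.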
Having acknowledged that diagnostic standardisation is not a trivial epidemiological or statistical modelling problem, it remains very important to address, since failure to account for the difference between the two diagnostics may lead to mis-estimation of temporal trends. This matters especially because the ratio of microscopy to RDT prevalence data points in literature review and national survey datasets has changed radically over the past few years, and because the difference in prevalence estimates can be more than a factor of two, as in the recent Tanzanian MIS surveys. Nevertheless, some studies looking at temporal trends don’t attempt any adjustment at all, which I find rather surprising.
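Even a crude adjustment is better than mixing the two diagnostics unadjusted in a trend analysis. A minimal sketch, assuming a previously fitted logit-linear map between the two scales (the coefficients and the toy time series below are placeholders, not published values), that standardises RDT observations onto the microscopy scale before comparing years:

```python
import numpy as np

# Placeholder coefficients for a fitted logit-linear relationship
# logit(p_rdt) = A + B * logit(p_micro); invert it to standardise
# RDT observations onto the microscopy scale. Illustrative values only.
A, B = 0.5, 1.1

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

def rdt_to_microscopy(p_rdt):
    """Map an RDT prevalence to its microscopy-equivalent value."""
    return inv_logit((logit(p_rdt) - A) / B)

# A toy time series in which surveys switch from microscopy to RDT
# partway through: the raw series understates the decline because the
# later (RDT) points sit systematically above the microscopy scale.
years = [2008, 2010, 2012, 2014, 2016]
prev = np.array([0.30, 0.26, 0.28, 0.25, 0.22])  # observed prevalence
is_rdt = np.array([False, False, True, True, True])

adjusted = np.where(is_rdt, rdt_to_microscopy(prev), prev)
for y, raw, adj in zip(years, prev, adjusted):
    print(f"{y}: raw={raw:.3f} adjusted={adj:.3f}")
```

The adjustment itself is the easy part; the hard part, per the discussion above, is fitting and defending the relationship it inverts.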