Gamma-Poisson mixtures

Laura Hatfield writes,

My question is pretty technical, but I come to you since you have written on the sort of gamma-Poisson mixtures I am dealing with (namely, in the R2WinBUGS manual).

My data are counts of unsafe sex acts reported by men in a randomized controlled trial of a behavioral intervention over time. These are extremely skewed and zero-inflated, so a Poisson model is not going to work. However, draws from a Negative Binomial (with, for example, r=0.1, p=0.03 or, equivalently, mu=3, theta=0.1) look very much like the observed data at a single time point.

I would like to model these counts (Y_ij) using random effects to deal with the correlation among measurements taken on each of i=1,…,675 men at j=1,…,3 timepoints. I would like to express the NegBin as a gamma-Poisson mixture, thus:

Y_ij ~ Poisson(lambda_ij)
lambda_ij ~ Gamma(alpha_ij, beta)
log(alpha_ij) <- beta_0 + beta_1*time_j + beta_2*treat_i + gamma_i
gamma_i ~ Normal(0, sigma^2)

Proc GLIMMIX in SAS refuses to converge, so I am trying WinBUGS. The problem is that WinBUGS hangs when the alpha_ij parameter gets small, since the Gamma pdf has a gamma function of alpha in it. The FAQ (http://www.mrc-bsu.cam.ac.uk/bugs/faqs/winbugs.html) describes this and suggests putting a lower bound on possible values. However, my alpha parameter is a logical node, so I don't think the I(.,.) bounding works. Any suggestions for a fix, or, more broadly, for re-parameterizing such a model?
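The mixture above is exactly the negative binomial: if lambda ~ Gamma(alpha, beta) (shape alpha, rate beta) and Y | lambda ~ Poisson(lambda), then marginally Y ~ NegBin with r = alpha and p = beta/(1+beta), so E[Y] = alpha/beta and Var[Y] = mu + mu^2/alpha. A quick simulation sketch (the r = 0.1, mu = 3 values come from the description above; everything else is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

alpha = 0.1          # gamma shape (the NB "r", sometimes called theta)
mu = 3.0             # target marginal mean
beta = alpha / mu    # gamma rate, so E[lambda] = alpha / beta = mu

# Gamma-Poisson mixture: lambda ~ Gamma(alpha, rate=beta), Y ~ Poisson(lambda)
# (numpy parameterizes gamma by shape and scale, so scale = 1/rate)
lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)
y = rng.poisson(lam)

# Theory: E[Y] = mu = 3, Var[Y] = mu + mu^2/alpha = 3 + 9/0.1 = 93
print(y.mean(), y.var())
```

With alpha this small, most of the mass piles up at zero, which is why the draws look like the skewed, zero-inflated data described above.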

My reply:

First, you could try lmer() in R or one of the multilevel model implementations in Stata; these might work better than SAS, which I’ve heard lots of bad things about.

But it should be possible to do it in Bugs. In a gamma distribution, I seem to recall that alpha is the shape parameter and beta the rate (inverse scale), so shouldn’t you be modeling the betas, not the alphas, with that lognormal distribution? Or maybe I’m missing something here.

If you do need to constrain things somehow, I’d recommend doing it not through hard constraints (the I(.,.) bounding in Bugs) but through soft constraints in the form of informative prior distributions on beta_0, beta_1, beta_2. (Also, it’s confusing that you have two different sets of betas in your notation.) The best way to do this, I think, is to center the predictors (certainly, center the time predictor) so that the prior distribution on the intercept is more interpretable.
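One way to set up the reparameterization suggested above, as a generative sketch: hold the shape theta fixed, put the log-linear regression on the mean mu_ij, and let the gamma rate be theta/mu_ij so that E[lambda_ij] = mu_ij. The parameter values and coefficient names here are purely illustrative (not fit to Laura's data), and the time predictor is centered as recommended:

```python
import numpy as np

rng = np.random.default_rng(7)
n_men, n_times = 675, 3

# Illustrative (made-up) parameter values
b0, b1, b2 = 1.0, -0.2, -0.5   # intercept, time slope, treatment effect
sigma = 0.8                     # sd of the man-level random effect
theta = 0.1                     # gamma shape, held fixed (controls overdispersion)

time = np.arange(n_times) - 1.0            # centered time: -1, 0, 1
treat = rng.integers(0, 2, size=n_men)     # 0/1 treatment indicator
gamma_i = rng.normal(0.0, sigma, size=n_men)

# Log-link on the mean, not on the shape:
# log(mu_ij) = b0 + b1*time_j + b2*treat_i + gamma_i
mu = np.exp(b0 + b1 * time[None, :] + b2 * treat[:, None] + gamma_i[:, None])

# lambda_ij ~ Gamma(shape=theta, rate=theta/mu_ij), so E[lambda_ij] = mu_ij
lam = rng.gamma(shape=theta, scale=mu / theta)
y = rng.poisson(lam)            # 675 x 3 matrix of counts

print(y.mean(), (y == 0).mean())  # overdispersed and heavily zero-inflated
```

Because theta stays a stochastic (or fixed) top-level node rather than a logical node built from the regression, the gamma-function-of-a-logical-node problem in WinBUGS goes away.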

4 thoughts on “Gamma-Poisson mixtures”

  1. You're right, Laura should be modelling the betas. She could then put whatever prior she likes on alpha – dunif(0.01, 100) might be a good choice, or an exponential distribution. When alpha gets large, the distribution will be pretty much Poisson. The cutoff is subjective; my prior has most mass between about 50 and 100, and is also hierarchical, with daily variation.

    Bob

  2. Wow, such an obvious fix. I re-ran today (modeling the appropriate parameter) and it worked well. Thanks also for the link to the Jen et al. paper.
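On the Poisson-limit point in the first comment: with the mean mu held fixed, Var[Y] = mu + mu^2/alpha, so the variance-to-mean ratio is 1 + mu/alpha and shrinks toward 1 (pure Poisson) as the shape alpha grows. A quick check, with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 3.0, 200_000

ratios = {}
for alpha in (0.1, 1.0, 100.0):
    # Gamma-Poisson mixture with mean mu and shape alpha
    lam = rng.gamma(shape=alpha, scale=mu / alpha, size=n)
    y = rng.poisson(lam)
    ratios[alpha] = y.var() / y.mean()   # theory: 1 + mu/alpha

print(ratios)  # ratio falls toward 1 as alpha grows
```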
