Actually, implementing the cut distribution is difficult, as explained in Martyn Plummer’s “Cuts in Bayesian graphical models” paper. That paper is very pedagogical: it explains the intractability of the feedback term that appears in the density of the cut distribution, and provides a partial solution, namely an approximate sampler. To the best of my knowledge, there is no generic MCMC algorithm to sample from the cut distribution.
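To make the two-stage structure of the cut distribution concrete, here is a hedged sketch in a toy two-module Gaussian model (the model and all names are illustrative, not taken from Plummer’s paper): module 1 has data Z informing theta1, module 2 has data Y informing theta2 given theta1. With flat priors everything is conjugate, so exact cut draws are possible: theta1 is drawn from its module-1 posterior only (no feedback from Y), then theta2 from its conditional.

```python
# Toy cut-distribution sampler (illustrative assumptions, conjugate case).
# Module 1: Z_i ~ N(theta1, 1); module 2: Y_j ~ N(theta1 + theta2, 1).
import numpy as np

rng = np.random.default_rng(1)
theta1_true, theta2_true = 0.5, -1.0
Z = rng.normal(theta1_true, 1.0, size=200)
Y = rng.normal(theta1_true + theta2_true, 1.0, size=200)

def cut_sample(n_draws):
    # Stage 1: theta1 | Z ~ N(mean(Z), 1/len(Z)) under a flat prior;
    # crucially, Y plays no role here -- that is the "cut".
    theta1 = rng.normal(Z.mean(), 1.0 / np.sqrt(len(Z)), size=n_draws)
    # Stage 2: theta2 | Y, theta1 ~ N(mean(Y) - theta1, 1/len(Y)).
    theta2 = rng.normal(Y.mean() - theta1, 1.0 / np.sqrt(len(Y)))
    return theta1, theta2

t1, t2 = cut_sample(10_000)
print(t1.mean(), t2.mean())  # near 0.5 and -1.0
```

The exactness here relies entirely on conjugacy: in a general model, stage 2 would require a converged MCMC run for every single theta1 draw, which is precisely where the generic difficulty lies.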

In separate work on unbiased estimation (https://arxiv.org/abs/1708.03625), my co-authors and I propose a two-step procedure to approximate integrals with respect to the cut distribution without bias, and we use the same illustrative example as in Martyn Plummer’s paper.

The main contribution (in my view) is to open the discussion on modular inference, by connecting different parts of the literature (see Section 1.2), and to provide diagnostics, i.e. something computable, to help choose between different approaches. The previous literature contains insightful but more informal discussions, apart from a few toy examples that have been worked out analytically, e.g. in the Liu, Bayarri & Berger 2009 paper.

Actually, Figure 4 here was an artistic depiction of the regression analysis example.

For instance, with a t.test there is a choice as to whether to fully pool the variances of the two groups or to _cut_ and make separate variance estimates. Or in a regression analysis where there is an interaction of slopes with gender, there is again a choice (often overlooked) between fully pooling the residual variances and making separate estimates by gender.
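The t-test version of this choice can be sketched concretely. A minimal example, assuming simulated data with deliberately unequal group variances: scipy exposes the pooling decision through the `equal_var` flag of `ttest_ind` (`True` fully pools the variances, `False` gives Welch’s test with separate variance estimates, i.e. the "cut").

```python
# Sketch of the pooling choice in a two-sample t-test.
# equal_var=True fully pools the two group variances;
# equal_var=False (Welch) estimates each group's variance separately.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=30)    # sd = 1
group_b = rng.normal(0.5, 3.0, size=100)   # sd = 3: variances clearly differ

pooled = stats.ttest_ind(group_a, group_b, equal_var=True)
welch = stats.ttest_ind(group_a, group_b, equal_var=False)
print(pooled.pvalue, welch.pvalue)
```

With unequal sample sizes and unequal variances, the two choices can lead to noticeably different p-values, which is exactly why the decision matters in practice.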

Now, in my experience, any attempt to formalize or automate such choices has run into serious problems in applications (since Bancroft’s paper in the 1940s?).
