Conor Goold writes:

I was reading this overview of mixed-effect modeling in ecology, and thought you or your blog readers may be interested in their last conclusion (page 35):

Other modelling approaches such as Bayesian inference are available, and allow much greater flexibility in choice of model structure, error structure and link function. However, the ability to compare among competing models is underdeveloped, and where these tools do exist, they are not yet accessible enough to non-experts to be useful.

This strikes me as quite odd. The paper discusses model selection using information criteria and model averaging in quite some detail, and it is confusing that the authors dismiss the Bayesian analogues (I presume they are aware of DIC, WAIC, LOO, etc. [see chapter 7 of BDA3 and this paper — ed.]) as being ‘too hard’ when parts of their article would probably also be too hard for non-experts.

In an area in which small sample sizes are common, I’d argue that effort to explain Bayesian estimation in hierarchical models would have been very worthwhile (e.g. estimation of variance components, more accurate estimation of predictor coefficients using informative priors/variable selection).

In general, I find the ‘Bayesian reasoning is too difficult for non-experts’ argument pretty tiring, especially when it’s thrown in at the end of a paper like this!

Along these lines, I used to get people telling me that I couldn’t use Bayesian methods for applied problems because people wouldn’t stand for it. Au contraire, I’ve used Bayesian methods in many different applied fields for a long time, ever since my first work in political science in the 1980s, and nobody’s ever objected to it. If you don’t want to use some statistical method (Bayesian or otherwise) cos *you* don’t like it, fine; give your justification and go from there. But don’t ever say not to use a method out of a purported concern that some third party will object. That’s so bogus. Stand behind your own choices.

That is weird. I am a mixed-effect modeler in ecology. We use both rstanarm and rstan for models. Typically we use loo for comparisons. Both the Bayesian models and their comparison technique are fairly straightforward to explain to non-experts. I also work as a statistical consultant on mixed-effect Bayesian models for the local graduate and undergraduate students in biology.
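[For readers who haven’t seen this workflow, a minimal sketch of the kind of thing I mean — the data frame `d`, its columns, and the formulas here are made up for illustration, not from any real analysis:

```r
# Fit two candidate multilevel models with rstanarm, then compare them
# with approximate leave-one-out cross-validation via the loo package.
# Assumes a data frame `d` with response y, predictor x, and grouping factor site.
library(rstanarm)
library(loo)

fit1 <- stan_glmer(y ~ x + (1 | site), data = d, family = gaussian())
fit2 <- stan_glmer(y ~ 1 + (1 | site), data = d, family = gaussian())

# loo() computes expected log predictive density (elpd) for each model;
# loo_compare() reports the elpd difference and its standard error.
loo_compare(loo(fit1), loo(fit2))
```

The output is just a small table of elpd differences, which is what we walk non-experts through. — ed.]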

I am learning Bayesian analysis now and have difficulty writing the data analysis and results sections from the rstanarm and Stan models.

Please, do you have any paper or other references that can help me do this better? I am looking at the effects of matrix type and distance (nearness to forest edge) on the body and physiological condition of the grey-headed bristlebill, a common insectivorous bird in West Africa. I used the stan_glmer function. I can do the posterior probability density plots and also MCMC traceplots as model evaluation checks, etc. My major problem is that I would like to know how these are applicable in ecology. I am only an applied user of both statistics and R.

I’m told ecology has a relatively large number of Bayesians, but that they’re still a small minority.

In natural language processing, I’ve gotten pushback on any method that isn’t exactly the same method as the last paper or isn’t improving some benchmark result by some epsilon. This isn’t the Chomsky problem, this is just people in linguistics and computation not knowing much stats. I found one review so frustrating that I posted the discussion online in my old blog in a post titled Lacks a convincing comparison with simple non-Bayesian approaches. So when it came time to write the journal article, I removed the hierarchical component (too hard to understand for both me and the NLP community at the time) and the Bayesian inference and just went with point estimates. Things are changing though, as our more recent paper (first author is Silviu Paun—I don’t have the stamina to first-author journal articles any more), which is fully Bayesian, was just accepted by TACL.

In biology, Mitzi was one of many first authors on a Science paper, Integrative analysis of the Caenorhabditis elegans genome by the modENCODE project, and the pushback from the editors was “no statistics without p-values”. She wound up having to manufacture p-values for their exploratory data analysis (using latent Dirichlet allocation, which isn’t exactly amenable to classical p-value tests!). So yes, out in the applied world, when you don’t get total control over where you submit or your field hasn’t caught up to Bayes yet, there are problems.

There’s also a problem if you don’t know the classical stuff and try to apply for jobs. Socially, it looks like it’s premature to give it up for those who don’t have tenure.

[edit: Forgot to mention there was a previous discussion on Andrew’s blog about model fitting for mixtures/clustering for ecology. Let’s just say Andrew’s post didn’t change the approach of most ecologists when comparing Bayesian models. We get questions about this kind of model comparison all the time on the forum and it turns out some people’s advisors or collaborators or journal editors don’t like our standard advice (which is to read whatever Aki et al. wrote most recently on the problem).]

The phylogenetics folks I used to work with thought Bayesian methods were appropriate — but there were often problems in things like communication and conceptual understanding.

For example, a grad student I was working with seemed to have a confused notion of what “hyperparameter” meant. But when I looked at the documentation for the software she was using, I could understand why; if I remember correctly, it used hyperparameter in the usual way, but also used it as the label for a function that gave several options, only one of them being using a hyperparameter rather than a fixed parameter.

Bob,

I am one of two in our department. Most graduate students are open to the idea of using Bayesian methods. But they are left explaining to their advisors why there are no p-values. A lot of the ecology-leaning journals are still heavily skewed towards no p-value, no publishing.

It has been conclusively proven (far beyond any possible doubt) that p-values are “too hard” for the same people to use properly. There probably hasn’t been a more conclusive discovery in all of science for the last 70 years.

So what in the world are they talking about?

As an ecologist, I can confirm that although a number (in the younger generation) are very open to change, there are many people who are very hesitant to move to a Bayesian approach. And these people tend to be the ones who teach intro stats classes, the supervisors of the Bayesians, and the gatekeepers of the journals.

Ecologists are pretty statistically sophisticated (compared to most biologists), but it seems to me that their statistical education never really progressed beyond the mid-1990s.

It may be a special use case, but in litigation, not using methods because some third party (i.e., the judge or jury) might object is behind about 95% of the choices of method.

Sad, but nor surprising.

oops — “not”, not “nor”.

I used to think it was sad, and it’s certainly highly confining, but when you come to think about it, the purpose of all statistical procedures is to help people understand something. Lay audiences come in thinking you’re trying to pull the wool over their eyes in the first place — procedures that require more explanation are worse procedures. What makes the work fascinating is when the simple explanation is wrong. You have to be able to pithily show that and show how you can correct the error. What you are *not* trying to do is convince the listener how smart you are. That is (and this is even sadder) the goal of the bulk of academic work, IMO.

I think there are two listeners for journal articles—the reviewers and the readers. The reviewers are the ones that need to be convinced that the work isn’t “too simple” or “too obvious” to be published. It’s one of the many reasons I’ve pretty much opted out of the journal game.

You’ll find that academic books almost always take the opposite tack of trying to be as simple as possible (not always succeeding, of course). That’s because books are a seller’s market, whereas journals are a buyer’s market. There’s almost no content reviewing on books.

For job talks, I hear people constantly urging students to make things more technical precisely to show the hiring committee how smart you are. When I worked at Bell Labs, my knucklehead manager’s mantra was “the more equations the better for a talk.” The problem is that snowing the audience under with bullshit doesn’t work if you lose the audience at the point they’re supposed to understand why you’re taking the mathematical journey. The better play, in my experience on both sides of the hiring committee, is to actually explain something complicated to people so that they get it. Then your audience thinks they’re smart and that you’re a great speaker. Even I can throw up enough math to confuse listeners—that doesn’t prove anything.

I mostly agree with Bob here FWIW :) My experience is that a great seminar involves some high-level concepts, but is clearly motivated (i.e., why should the audience care?) and lucidly explained, so that an intelligent layperson could follow without much trouble. Maybe if you are presenting to a physics, math, or stats department, including some detailed math is good, but outside those fields you really want to keep it mathematically concise and simple. You want to provide enough detail to show or at least suggest depth, but not get lost “in the weeds”. It really is a pretty specialized art, but I would say that trying to wow a hiring committee with lots of math and technical “terms of art” is not a good idea!

Of possible interest: An oft-quoted article “How to Talk Mathematics” by mathematician Paul Halmos

http://vac2009.ltumathstats.com/Home/rules-for-the-gordon-preston-prize/how-to-talk-mathematics

On the side of “very Bayesian ecologist”, Perry de Valpine, PI of the NIMBLE package, is an ecologist.

Hi, same problem here too. I used to see many articles saying the probit model is often ignored as a tool for analyzing ordinal data because of difficulties in interpreting the results. I am trying to break this taboo by using a multilevel Bayesian ordinal probit model in my analysis :). BTW, if by chance one of you were my reviewer, what report would you expect when investigating the impact of exposure on the ordinal response (estimated regression coefficients, marginal probabilities, or odds computed from marginal probabilities)?