“Or, to take a more extreme case, the many papers of Brian Wansink, where it’s not clear how the data were even collected.”

GS: I don’t know if this applies to Wansink, but I am of the opinion that, in mainstream psychology, the lack of clarity about procedure that one often observes, even in the Methods section, sometimes stems from the fact that what is measured is not what is of interest – what is of interest is the “thing” or “process” for which the manipulations and measurements are, ahem, merely “operational definitions.” And when it comes to the popular press and press releases…forget about it!

I cannot unhear this.

Don’t get me wrong, the talk is really good; I watched it in one sitting after leaving the comment.

Andrew:

I was trying to ask for a definition or list of criteria, etc. for “fully explain their data”. Examples don’t help much. So let me try to ask again another way:

Would I be correct in assuming that “fully explain one’s data” includes all of the following:

Explain what measures were used, and why those choices were made.

Explain details of data collection procedures, and why those choices were made.

Explain what unexpected problems (that might be relevant to interpretation of or inferences from the data) occurred when collecting data (e.g., different understandings of the questions asked; circumstances that resulted in missing data, especially not-missing-at-random data).

Explain data cleaning procedures.

Explain any anomalies that appeared in the data.

Anything else?

I think “how” is kinda a silly question. Presumably he just didn’t shave for a while.

> if viewed classically, is positively biased (type M error)

That requires a recognition of how science is currently being done (mostly, the confidence intervals that did not cross zero, or not by much, are the ones that get _remembered_).

That important recognition is conveniently ignored in the classical approach?
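The selection effect described here can be sketched in a quick simulation. The numbers below (true effect, standard error, number of replications) are hypothetical, chosen only to illustrate how remembering only the significant intervals produces a type M (magnitude) error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small true effect measured with a noisy instrument.
true_effect = 0.1
se = 1.0
n_studies = 100_000

# Simulated unbiased estimates from many replications.
estimates = rng.normal(true_effect, se, size=n_studies)

# The "remembered" results: those whose 95% interval excludes zero,
# i.e. |estimate| > 1.96 * se.
remembered = estimates[np.abs(estimates) > 1.96 * se]

# Conditional on being remembered, the estimates greatly exaggerate
# the true effect: this ratio is the exaggeration (type M) factor.
exaggeration = np.mean(np.abs(remembered)) / true_effect
print(f"exaggeration factor: {exaggeration:.1f}")
```

Each individual estimate is unbiased, yet the subset that clears the significance filter overstates the effect many times over.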

Garnett:

I believe that is a good guess.

Shravan:

Yup, I don’t even have much of that in my own books.

Garnett, Cliff:

There are lots of ways of summarizing a statistical analysis, and it’s good to have a sense of how the assumptions map to the conclusions. My problem with the paper discussed in my talk was that they presented a point estimate of an effect size magnitude (42% earnings improvement from early childhood intervention) which, if viewed classically, is positively biased (type M error) and, if viewed Bayesianly, corresponds to a substantively implausible prior distribution in which an effect of 84% is as probable as an effect of 0%.

If we want to look at the problem classically, I think researchers who use biased estimates should (i) recognize the bias, and (ii) attempt to adjust for it. Adjusting for the bias requires some assumption about plausible effect sizes; that’s just the way things are, so make the assumption and be clear about what assumption you’re making.

If we want to look at the problem Bayesianly, I think researchers should have to justify all aspects of their model, including their prior distribution. Sometimes the justification is along the lines of, “This part of the model doesn’t materially impact the final conclusions so we can go sloppy here,” which can be fine, but it doesn’t apply in a case like this where the flat prior is really driving the headline estimate.

“Couldn’t you just graph out the estimated effects under different priors in an Appendix figure?”

Isn’t this only a small step away from just presenting the MLE estimates (assuming a relatively small parameters-to-samples ratio)? Or at least arguing for your prior and including it in the main analysis, and then including only the MLE + Hessian in the appendix.

If you’re content with the normal approximation to your estimates, then the MLE point estimates + Hessian approximate the posterior under a flat prior. And if you’re content with using normal priors, you can easily slap them on post hoc however you want, provided the MLE estimates + Hessian are reported!

Of course, if you’re dealing with the case of a large number of estimated parameters relative to the number of samples, you may not be content at all with the MLE + Hessian approximation.
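A minimal sketch of that post-hoc combination, using normal-normal conjugacy. The numbers (an MLE of 0.42 with standard error 0.21, and a skeptical mean-zero prior) are hypothetical, not taken from the paper under discussion:

```python
import numpy as np

# Hypothetical reported summary: MLE and Hessian (observed information).
mle = np.array([0.42])               # point estimate
hessian = np.array([[1 / 0.21**2]])  # inverse of the squared standard error

# Under a flat prior, the normal approximation to the posterior is
# N(mle, inv(hessian)).
flat_post_cov = np.linalg.inv(hessian)

# Slapping a normal prior N(prior_mean, prior_cov) on post hoc:
prior_mean = np.array([0.0])
prior_cov = np.array([[0.10**2]])    # skeptical: effects likely near zero
prior_prec = np.linalg.inv(prior_cov)

# Precisions add; the posterior mean is a precision-weighted average.
post_prec = hessian + prior_prec
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (hessian @ mle + prior_prec @ prior_mean)
print(post_mean)  # shrunk well below the MLE of 0.42
```

The point of the comment above is exactly this: given the MLE and Hessian, any reader can perform this two-line update with whatever normal prior they find plausible.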

I strongly agree with this. However, I think that in many fields there is a notion of statistical analysis as being either correct or incorrect, and it is the reviewer’s job to point that out. The concomitant view may be that the reviewer is responsible for identifying a correct or incorrect prior. That’s my guess, anyway.

Martha:

The part about data X,Y is not so relevant to the example that I discuss in the talk, but it comes up in other examples, for instance the notorious paper on air pollution in China where it appears that life expectancy is 91 in one of those cities. Or, to take a more extreme case, the many papers of Brian Wansink, where it’s not clear how the data were even collected.

Reviewers should be able to force authors to be more accountable for their assumptions and what these imply?

Andrew: I understand what you mean by “fully state their assumptions A, B, and C,” but I’m not clear on what you mean by “fully explain their data X, Y.” Please clarify. (Or point me to the part of the video where that appears, if the explanation is there; I don’t have the time to watch the whole thing.)

Yeah, that’s what we usually do, and it seems to get us through pretty well.

I think that Andrew feels that the reviewer should be able to have a more forceful influence on the content in the body of the paper. I see his point (and the other commenters’), but I’m ambivalent, especially outside of the obviously absurd prior.

Couldn’t you just graph out the estimated effects under different priors in an Appendix figure?

An analogous problem in my world is when you have a bandwidth choice for a non-parametric estimate of some sort. At which point, if I were a skeptical reader or referee, I would just ask for a graph with bandwidth on the X-axis and BetaHat (estimated using that bandwidth) on the Y-axis.

In the paper you can make whatever argument you want for whatever bandwidth/prior you find most convincing, and argue about why extreme deviations from that are problematic. But might as well show all the estimates, right?
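Under the normal approximation discussed elsewhere in this thread, the suggested appendix figure is cheap to produce: sweep the prior scale and record the resulting estimate. The numbers here (MLE 0.42, SE 0.21, mean-zero normal priors) are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical summary statistics for illustration.
mle, se = 0.42, 0.21
data_prec = 1 / se**2

# A grid of mean-zero normal priors, from very skeptical to nearly flat.
prior_sds = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 5.0])
prior_precs = 1 / prior_sds**2

# Posterior mean under each prior (normal-normal conjugacy).
post_means = (data_prec * mle) / (data_prec + prior_precs)

# Plotting prior_sds (X-axis) against post_means (Y-axis) is the analogue
# of the bandwidth-sensitivity graph: the estimate climbs toward the MLE
# as the prior flattens.
for sd, m in zip(prior_sds, post_means):
    print(f"prior sd {sd:4.2f} -> estimate {m:.3f}")
```

As with the bandwidth graph, the reader sees at a glance how much of the headline number is coming from the data and how much from the flatness of the prior.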

I appreciate these insights.

I ask because a recent reviewer insisted that our priors on gender effects should place more probability on big effects (based on the reviewer’s reading of the literature), and we simply don’t agree with that reading.

The reviewer was operating under a set of beliefs different from ours, yet felt compelled to demand that we adhere to them. My opinion is that we can provide the data and allow the reviewer to redo the analysis with any priors he/she wants.

Bryan:

What you write is mostly a good description of what I said, except for one thing: You write that I demand “that the author believes and can justify the prior they chose to use.” I do want the authors to justify their model (all aspects of their model, not just their prior distribution), but I don’t demand that the authors “believe” it. I just want them to make their reasoning clear.

So, in the example discussed in that talk, if the authors want to say, “Given assumptions A,B,C and data X,Y, we come to conclusion Z,” that’s fine. But I want them to fully state their assumptions A, B, and C, and I want them to fully explain their data X, Y. I have a feeling that some of these assumptions are so ridiculous that nobody would be able to state them with a straight face. Hence I hope that a norm or requirement to fully state one’s assumptions (in this case, in the form of a prior distribution and a data model) would lead to some improvement, by embarrassing people away from some of their more extreme claims, such as that claim of 42% earnings improvement from early childhood intervention, or that claim that women are three times more likely to wear red during certain times of the month, etc.

He demands that the author believes and can justify the prior they chose to use. Andrew does not believe that, a priori, an 84% improvement in earnings is as likely as a 0% improvement. He assumes that the author doesn’t believe that either, but chose to use the uniform prior anyway.

It depends on what you think the prior is and what role it has – perhaps see here http://statmodeling.stat.columbia.edu/2017/05/02/prior-information-not-prior-belief-2/#comment-479468

I think this is an instance of what Andrew suggested was an inadequate theory of inquiry (a Bayesian philosophy that was more popular in the past) preventing folks from doing things they should have done (e.g., checking their prior).

Garnett:

I prefer to talk about prior information, not prior belief. See here for further discussion of this point. The prior distribution and data model (“likelihood”) used in a Bayesian analysis are assumptions that are intended to code information that is typically some subset of all the information available about a problem. So, (a) there’s nothing special about the “prior distribution”: these concerns arise with all parts of a model and data-analysis procedure; and (b) our models are just about always tentative and subject to improvement.

I guess I always thought that the prior is a reflection of the analyst’s beliefs, not subject to reviewers’ demands for revision.
