“Check out table 4.”

A colleague sent along this article and writes:

Check out table 4. This is ERC-funded research (the very best of European science gets this money).

OK, now I was curious, so I scrolled through to table 4. Here it is:

Yup, it’s horrible. I don’t know that I’d call it cargo cult science at its absolute worst, but it’s pretty bad.

(The full story, for those who are not familiar, is that p-values are super-noisy; see, for example, the first full paragraph on page 5 of this article. Hence summarizing a study by which of its p-values are statistically significant is a criminally wasteful presentation of results.)
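To see just how noisy p-values are, here's a minimal simulation of my own (not from the article): the same experiment, with the same true effect, is replicated ten times, and the resulting p-values bounce all over the place, some "significant" and some not.

```python
# Sketch: repeated replications of one true effect give wildly varying p-values.
import math
import random

random.seed(1)

def two_sample_p(n=20, effect=0.5):
    """Two-sided p-value from a z-test comparing two groups of size n,
    drawn with known unit variance and a true mean difference of `effect`."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2.0 / n)  # sd of the difference of two means, sigma known
    z = diff / se
    # two-sided p-value via the standard normal CDF
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

ps = [two_sample_p() for _ in range(10)]
print([round(p, 3) for p in ps])  # same design, same true effect, very different p-values
```

A table of significance stars would split these identical replications into "findings" and "non-findings" purely on noise.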


  1. Vishy says:

Did anyone notice the text discussion surrounding these tables?

  2. markus says:

    I have produced (though not published) such graphs and find them useful for quick identification of patterns. Table 4 shows the results of 5*5*2 = 50 significance tests and it’s difficult to convey the pattern of these results in a manner that is both (a) quick and (b) open enough to allow the reader to form his/her own conclusions.
    Seeing that there are also a table 5 and a table 6 in a similar format, the authors apparently felt that this worked for them and was useful to readers. Could this be improved? Certainly! But, taking a glass-half-full perspective, is it an improvement over a wall of text with 50 p-values and a revelation from on high about what patterns there are and are not? Also certainly yes!
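The tradeoff markus describes can be resolved differently: the usual alternative advocated on this blog is to show all 50 estimates with their uncertainty, not just significance, in one compact grid. Here is a hypothetical sketch (the factor names, effect sizes, and standard errors are all invented) of what such a display might look like as plain text.

```python
# Hypothetical sketch: a 5*5*2 grid of estimates ± standard errors,
# letting the reader judge the pattern instead of reading off stars.
import random

random.seed(2)

def estimate_grid(conditions, outcomes, groups):
    """Return text lines: per group, one row per condition, one ± cell per outcome."""
    lines = []
    for g in groups:
        lines.append(f"{g}:")
        lines.append("      " + "  ".join(f"{y:>11}" for y in outcomes))
        for c in conditions:
            cells = []
            for y in outcomes:
                est = random.gauss(0, 1)  # stand-in for a fitted effect
                cells.append(f"{est:+.1f} ± 0.5".rjust(11))  # invented SE
            lines.append(f"{c:>4}  " + "  ".join(cells))
    return lines

rows = estimate_grid([f"A{i}" for i in range(1, 6)],
                     [f"Y{j}" for j in range(1, 6)],
                     ["treatment", "control"])
print("\n".join(rows))
```

Such a grid is as quick to scan as a table of stars, but it keeps the magnitudes and uncertainties that the stars throw away.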

  3. I reread John Ioannidis’ 2005 paper recently and found it surprising how thoroughly it was steeped in the mindset of type 1/type 2 errors, talking endlessly about “real” relationships and “no” relationships, “true” findings and “false” ones. I guess it’s not that surprising given the paper’s title.