
Take two on Laura Arnold’s TEDx talk.

This post is by Keith O’Rourke and, as with all posts and comments on this blog, is just a deliberation on dealing with uncertainties in scientific inquiry and should not be attributed to any entity other than the author. As with any critically-thinking inquirer, the views behind these deliberations are always subject to rethinking and revision at any time.

In this post I try to be more concise and direct about what I found of value in Laura Arnold’s TEDx talk that I recently blogged about here.

Primarily it was the disclosure from someone who could afford to buy good evidence (and experts to assess it) that they did not think good enough evidence was actually available right now. Yes, there is some real gold among the research outputs – but it just cannot be reliably distinguished from fool’s gold at present – so don’t buy now, or buy at your own risk. Furthermore, believing high-quality evidence was necessary for any reasonable attempt to make things better, they argued that evidence quality needed to be rectified first. And at the end of the talk they did not blame reporters or careerist researchers but instead claimed it was _our_ fault and that _we_ needed to be prepared to contribute to resolving it.

That it is up to the masses to fix the current faulty evidence-generating machines very much rings true to me. I think that was an important turning point in the AllTrials initiative: “It immediately occurred to Tracey [Director, Sense about Science] that there had to be thousands of people who took part in clinical trials who felt ‘I want my participation to count for something—I want the results that you who ran the trial got because I took part to be used.’” Here, maybe – “I want people who might want to help me to actually have good evidence on how to do that”.

I also have a more nuanced or speculative take.

p.s. jrc added some comments that seemed to nicely clarify some points I was trying to make.

In particular this one – “Arnold Foundation ultimately cares about facts/relationships in the world, not methods, but has to care about methods because they can’t get the facts they want with the methods people are using.” Right – that’s the point – if they can’t get the facts [evidence] they want with the methods people are using, that’s a problem for _all of us_ [the masses]. Given it’s a problem for all of us (not just those of us working in research), I think it would be helpful if all of us were better informed about it and made some sort of contribution to fixing the problems.

When research consumers fund what I called nuisance research (research to help others do better interest research), there is no downside of an implied admission of needing to do such research. (An annoyance, yes, as they would rather not have to do it, but no personal or professional embarrassment about having to do it.) On the other hand, now think of research producers funding nuisance research (research to help themselves do better interest research) – that seems like a disclosure of not currently knowing how to do the best interest research. Unfortunately, seemingly credible if not incontestable claims of currently knowing how to do the best interest research possible may be perceived as an absolute necessity. Such claims are required by universities to support their faculty and present them as important assets to others, by funding agencies to justify who they fund, by faculty themselves to maximise their funding, publication and promotion opportunities, etc. So amongst many research producers there may be insurmountable impediments to any noticeable attempts to rectify the current inadequate quality of research outputs.

Now, I do remember running into this at the University of Toronto when I was looking into ways to provide a statistical/methodological resource for all faculty members (in 1997). It went sort of like this – “our faculty are quite capable of doing research on their own – they would not be here if they weren’t!”. Because of this and other reasons, we went with what we called an “After hours induction club”, hosted at the university’s faculty club in the evening so that interested faculty could assess the resource “discreetly”. An invited statistical expert gave a short talk on areas they would be interested in applying in practice, followed by questions and informal discussion. Unfortunately, I left the University of Toronto very shortly after setting this up and only one session occurred – it did seem to work well.

It also explains the extremely poor record of the funding of biostatistics in Canada – “Funding for biostatisticians comes from … the Canadian Institutes of Health Research (CIHR) for the clinical research. On the other hand, CIHR rarely funds the advancement of methodological research.”

Perhaps more importantly, given research consumers want to fund what I called nuisance research, they will likely need to fund research producers to do this – who else would know how? The concern is that some percentage of them may be too well set in their ways, always promoting themselves as the best at currently knowing how to do the best nuisance research, given they have successfully “faked” currently knowing how to do the best interest research for most of their careers. Those who have not maintained that reputation are unlikely to still be research producers. Not all or even many will have been “faking it” – but some will have.



  1. jrc says:

    I’m adding “nuisance research” to my CV as one of my research fields. I mean, people already call me a nuisance, and find my research inconvenient, and think I’m annoying.

    • Keith O'Rourke says:

      jrc: Nuisance is in the eye of the beholder ;-)

      The term “nuisance research” is a bit of a take on statistical jargon regarding interest and nuisance parameters in statistical theory – interest parameters being what you are really interested in learning (say, a treatment effect), while nuisance parameters (say, the background event rates at particular hospitals) are other parameters that get in the way of learning about the interest parameters. You wish you knew what they were but you don’t, and they can be very hard to estimate well.
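      This interest-vs-nuisance distinction can be illustrated with a small simulation (all numbers hypothetical, and the setup a sketch only, not anyone’s actual analysis): several hospitals differ in their background event rates (the nuisance parameters) while sharing one treatment effect (the interest parameter, a log-odds ratio). If treatment allocation varies across hospitals, a naive pooled estimate that ignores the nuisance parameters is confounded, while a within-hospital (stratified) estimate conditions them away:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nuisance parameters: background event rates differ by hospital.
baseline_logodds = np.array([-2.0, -0.5, 1.0])
# Interest parameter: one common treatment effect (log-odds ratio).
treatment_effect = -1.0
# Treatment allocation differs across hospitals -> confounding risk.
n_treated = np.array([2000, 200, 200])
n_control = np.array([200, 200, 2000])

def draw(logodds, n):
    """Draw binomial event counts given log-odds and sample sizes."""
    p = 1.0 / (1.0 + np.exp(-logodds))
    return rng.binomial(n, p)

events_t = draw(baseline_logodds + treatment_effect, n_treated)
events_c = draw(baseline_logodds, n_control)

def logodds(events, n):
    p = events / n
    return np.log(p / (1.0 - p))

# Naive pooled estimate: ignores the hospital-level nuisance
# parameters, so the differing allocation biases it.
naive = (logodds(events_t.sum(), n_treated.sum())
         - logodds(events_c.sum(), n_control.sum()))

# Stratified estimate: effect within each hospital, then an
# inverse-variance weighted average across hospitals.
per_hospital = logodds(events_t, n_treated) - logodds(events_c, n_control)
var = (1/events_t + 1/(n_treated - events_t)
       + 1/events_c + 1/(n_control - events_c))
weights = 1.0 / var
stratified = np.sum(weights * per_hospital) / np.sum(weights)

print(f"true effect:          {treatment_effect:.2f}")
print(f"naive pooled estimate: {naive:.2f}")
print(f"stratified estimate:   {stratified:.2f}")
```

      The stratified estimate lands near the true effect while the naive one can be off by a factor of two or more – which is the sense in which nuisance parameters “get in the way” even though you never cared about them.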

      It might have been explained better in the earlier post – “In fact, that is where they now believe they need to spend their money (nuisance funding – what they wish they did not need to do) in order to have some hope in the future to fund ways to make the world better (interest funding – what they want to do).”

      • jrc says:

        Oh yeah – I got the nuisance parameter allusion. I really do like the idea of calling it “nuisance research” instead of “methods research”. It emphasizes how only a jerk would bother to do research that shows other researchers that their research doesn’t (can’t) work. I mean, what’s the value to the field of making other researchers feel bad? Getting it right the next time? Improving our understanding of the world? Protecting against the use of poor methods that are likely to produce spurious results in the future if not addressed? Psssshhh…. all that is just a nuisance to the task of publishing papers.

        Since I have the snark dialed up to 11 today, let me clarify. I think that calling it nuisance research a) gets at the idea that it is research we’d rather not have to do if we are actually interested in learning some particular set of facts or relationships about the world*; b) gets at the idea that there is something of the replication troll in the methodologist, and replications (like conceptually difficult methods) are often seen by the original authors (or field) as a nuisance; and c) it has the nice statistical allusion to nuisance parameters you mentioned.

        *the way the Arnold Foundation ultimately cares about facts/relationships in the world, not methods, but has to care about methods because they can’t get the facts they want with the methods people are using.

        • Keith O'Rourke says:

          > It emphasizes how only a jerk would bother to do research that shows other researchers that their research doesn’t (can’t) work.
          Rather, only a naive funder would _not_ bother to do [fund] research that shows other researchers that their [current] research doesn’t (can’t) work, so that things change for the better?

          > Arnold Foundation ultimately cares about facts/relationships in the world, not methods, but has to care about methods because they can’t get the facts they want with the methods people are using.
          Right – that’s the point – if they can’t get the facts [evidence] they want with the methods people are using that’s a problem for _all of us_ [the masses].

  2. Stuart Buck says:

    This post is so poorly written, both in grammar and in the logical flow, that I can’t make any sense of it. Perhaps a really rough stream-of-consciousness draft accidentally got published?

  3. jrc says:

    Some thoughts on incentive-realignment towards the normalization of nuisance research – describing 3 loci upon which consumers of scientific research can exert power: a) on the field; b) on researchers; and c) on journals.

    1 – Field Incentives: a) Demand Replication: refuse to act upon “findings” that have not been verified by independent replication (of whatever sort, not necessarily pure replication of a single experimental finding); b) Demand Validation: demand methods work as necessary basic science and do not make inferences based on the use of unproven or untested methods (the whole “validation” thing in Psychometrics aims to do this for measurement, say what you will about its success).

    2 – Researcher Incentives: a) Pay them: fund methods work (see Foundation, Arnold); b) Mock them: stigmatize the use of improper methods, particularly when well-known in the literature; c) De-Idealize them: de-stigmatize the admission that previous papers employed faulty methods and should be considered unreliable estimates of the matter at hand… that shouldn’t count against authors (whereas (b) should). In fact, being honest about the limitations of previous work that they were unaware of at the time of publication should ultimately count for authors, as demonstration of their underlying desire to seek and produce knowledge.

    3 – Journal Incentives: a) Demand Methods Research: cite and promote useful methodological research, and ask for more to be published (if only as a service to help you critically evaluate the rest of what the journals publish); b) Hold a Grudge: stigmatize journals that consistently publish poor research or refuse to in any way correct their mistakes (that is, I think, why Andrew keeps the leading-P in PPNAS); c) Reshape the Market (aka Journal Supply): start and subsidize journals that publish stuff about what you need to know… then let the experts run it.

    I confess that partly those thoughts must be a semi-conscious frustration on my part that methods research that isn’t simultaneously an empirical exercise is incredibly hard to publish (meaning papers where testing the method is the ultimate objective, not estimating some parameter related to the world). And so maybe I’m being blind to a whole series of potential incentive-shifting behaviors that could improve things. Maybe some of those ideas are in the TEDx talk, but somehow I feel like actually watching a TEDx talk would compromise my integrity, so I haven’t watched it.

    PS – this comment was written after the PS by Keith above. You should not view this comment as pre-recommended by Keith. Just the one above where snark=11. This one probably could’ve used more snark.

    • Keith O'Rourke says:


      I believe most of the incentives you listed above have been kicked around in various places by various people over the last 20 or 30 years with little impact. Now, there has not been much awareness and discussion of this among the general public and that might help give policymakers and governments some impetus to consider how to change things. My secondhand knowledge of the AllTrials initiative suggests that was important there. My bringing attention to Laura Arnold’s TEDx talk was my thinking the general public would get from that talk the sense that there is a real problem, it affects them and needs to be addressed. Folks who read this blog regularly likely won’t learn much new from it.

      Another issue arises with regard to how challenging it might be for most current researchers to fully participate in resolving the challenges. I met a number of academics and scientists who left communist countries during the cold war, and many intimated they had a very hard time not acting like they were still in a secret police bubble. As one of them put it, it’s like walking with a cane for years and then having your leg fixed – you still keep walking with a limp for a long time.

      Now, very few academics survive in academia without puffing up their work and abilities, no matter how good their work or abilities. The dean, funding agencies, journal reviewers, conference organizers, etc. just cannot be trusted to get it. Especially given how much puffing other people are doing. At one university (not in the top 20%) the dean’s interview question to screen out unwanted faculty was “Convince me that you are the best at what you do in the world.” So when funders (apparently Gates is one) ask researchers to let them know about the failings in the research they funded – I am not sure that’s going to work very well.

      > Pay them: fund methods work
      Here’s an example. When I was with the Cochrane Collaboration they had agreed to fund a statistician (not sure if half-time or full-time, but for a year or two). I suggested they have someone do detailed statistical reviews of the pending and past meta-analyses, contact the meta-analysis authors to work through how to fix any real problems, and then document the problems and write up resources (e.g. case studies and how-to materials). Others did not like this and suggested it would be better to have someone to work with on publishable methodology research – which of course would be more likely to enhance their reputation, influence and careers. Why fix real problems when you could be fixing your own career instead?

  4. jrc says:

    “Now, very few academics survive in academia without puffing up their work and abilities no matter how good their work or abilities.”

    I agree that this is empirically true, but I wonder how much of it need not be. In part, my comment about “De-Idealizing” researchers was aimed at making it so that this no longer feels necessary for people. But I also wonder about the extent that it is actually necessary versus just assumed to be necessary because everyone sees everyone else doing it. Maybe I’ll learn that lesson when I get denied tenure.

    I agree with almost all of this. The reason for the outline was just to organize thoughts – I feel like sometimes the comments section gets a bit convoluted for new readers, and wanted to try and spell out where different parts of different arguments fit together. Much like the talk, I suspect it was not particularly useful to regular readers of the blog, but that the broader public might find it useful. As though the broader public reads the comments sections here.

    I thought this was a smart post about an important idea that wasn’t getting enough attention. Thanks for sparking the conversation.
