Bertrand Russell goes to the IRB

Jonathan Falk points me to this genius idea from Eric Crampton:

Here’s a fun one for those of you still based at a university.

All of you put together a Human Ethics Review proposal for a field experiment on Human Ethics Review proposals.

Here is the proposal within my proposal.

Each of you would propose putting together a panel of researchers at different universities. You would propose that each of your panel members – from diverse fields, seniority levels, ethnicities and such – would submit a proposal to his or her ethics review board or Institutional Review Board for approval, and each of the panellists would track the time it took to get the proposal approved, which legitimate ethical issues were flagged, which red herring issues also held things up, and how long and onerous the whole ordeal was.

Still in your proposal, you would then propose gathering the data from your panellists and drawing some conclusions about what sorts of schools have better or worse processes. Specific hypotheses to be tested would be whether universities with medical schools were worse than others because medical ethicists would be on the panel, and whether universities with faculty-based rather than centralised IRBs would have better approval processes.

You would note that members of your panels could ask their University’s HR advisers to get data on the people who are on the IRBs – race, gender, ethnicity, area of study, rank, age, experience, time on panel, number of children, marital status, and sexual orientation (though not all of those would be in each place’s HR database); you’d propose using these as control variables but also to test whether a panel’s experience made any difference and whether having a panel member from your home Department made any difference. It would also be interesting to note whether the gender, seniority, ethnicity and home department of the submitter made any difference to the application.

End of the proposal-within-the-proposal.
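The analysis the proposal gestures at could be sketched as a simple regression of approval time on institution characteristics. The sketch below is purely illustrative: the variable names, effect sizes, and data are all invented for illustration, and nothing here comes from an actual IRB study.

```python
import numpy as np

# Entirely synthetic data: in the real study these would come from the panellists.
rng = np.random.default_rng(0)
n = 200  # hypothetical number of IRB submissions tracked by the panel
has_med_school = rng.integers(0, 2, n)   # 1 if the university has a medical school
centralized_irb = rng.integers(0, 2, n)  # 1 if review is centralised rather than faculty-based

# Made-up "true" effects: a medical school adds 20 days, centralisation adds 10,
# on top of a 30-day baseline, plus noise.
approval_days = 30 + 20 * has_med_school + 10 * centralized_irb + rng.normal(0, 5, n)

# Ordinary least squares: approval_days ~ intercept + has_med_school + centralized_irb
X = np.column_stack([np.ones(n), has_med_school, centralized_irb])
beta, *_ = np.linalg.lstsq(X, approval_days, rcond=None)
print(beta)  # roughly [30, 20, 10]
```

The control variables mentioned in the proposal (panel experience, submitter seniority, and so on) would simply become additional columns of X.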

Now for the fun part: each one of you reading this is a potential member of a panel for a study for which nobody has ever sought ethical approval, but which will be self-approving in a particularly distributed fashion: The IRB proposal to be tested is the one I’ve just outlined. Whichever of you first gets ethical approval is the lead author on the paper, is a data point, and already has the necessary ethics approval. Everybody else, successful or not, is a data point.

This is just the greatest. You can only do this sort of study if you have IRB approval, but the only way to get IRB approval is . . . to do the study!

This is related to other paradoxes such as: I can do nice (or mean) things to people and write about what happens, but call it “research” and all of a sudden we’re in big trouble if we don’t get permission. Crampton’s idea is beautiful because it wraps the problem in itself. Russell, Cantor, and Gödel would be proud.

16 Comments

  1. BenK says:

    Excellent and elegant.

    A related conundrum is how to fund the human subjects protection office. In cases I’m familiar with, human subjects protection determines the level of review required – including whether the project involves human subjects and/or is research. The office is then funded based on work-load (that is, number of protocols it deems it must review and degree of scrutiny) and the chief is paid in part based on the number of direct reports he/she has. This is obviously a self-licking ice cream cone. Of course, making it independent of workload isn’t viable. Pay based on speed of review is just as conflicted. Taking away the power to determine what is covered is really the only sensible way to resolve the conflict of interest. No regulator should be permitted to determine their own jurisdictional boundaries.

  2. psyoskeptic says:

    If you conducted the proposed experiment, you wouldn’t need ethics approval for this study, because you’re just recording data from a process that would happen anyway; it would fall under observational research. Therefore, kill some more birds with this stone and tie it to replication projects: propose the data collection to the replication projects currently ongoing. Another advantage of using them is that the research is known to have had prior approval before starting.

    Under some systems you don’t even need approval to conduct the study, because introducing variables that fall within the normal course of events in classrooms or workplaces is permitted without review.

    • Fred says:

      psyoskeptic is correct. In many institutions, IRB approval would not be required to perform this research. At my institution it probably would be, and, indeed, it’s probably a good idea for all researchers to go through the process as soon as they have thought through their methodology. The only studies having to do with human beings that need not bother with our IRB process are those that rely exclusively on publicly available archival data.

      But in this case the proposed study would not be deemed research on human subjects. It is instead directed at the assessment of an organizational process, using only archival data on human subjects, and therefore falls in the exempt category (so long as the HR department has agreed to provide the required data). We have an online process for identifying exempt research, in part because journals and funding agencies often require evidence of IRB authorization. It merely requires the researcher to log on, check four boxes, and provide an electronic signature. The approval is automatically granted, if the researcher checks the right boxes, and forwarded by e-mail to the researcher. If, on the other hand, researchers indicate that the research is not exempt, the online app walks them through submission of the information the IRB needs to review their proposal.

      I think IRBs often get a bad rap, in part because at the outset many of them didn’t know what they were doing and tended to overreach. I imagine that most of these problems have been addressed at most institutions.

  3. Rahul says:

    Isn’t this sort of recursiveness always going to arise when you ask a body meant to rule on others to rule on itself? Sounds too contrived.

  4. Clyde Schechter says:

    @BenK

    Your experience is rather different from mine. I served on the IRBs of two different institutions, for about a decade at each. At one of those, the volume of protocols reviewed more than doubled over that period, with *no* increase in funding for the IRB (other than the cost-of-living staff salary adjustments that were common then). The other institution’s IRB staff was increased as the volume of studies grew, but not proportionally. (Then again, this also occurred while new on-line technology for filing and managing protocols was introduced.) While I was not privy to any of the discussions on how to fund the IRBs, it seems that they were treated the way you would expect any institution or business to treat a function that does not contribute directly to the bottom line: give it the minimum it needs to avoid bringing down the profitable side of the system.

    Federal statutes and regulations define the minimum domain of jurisdiction for IRBs. Some institutions have expanded that, but at least at my two institutions, those decisions were not made by the IRB but by other parts of the administration and the office of legal counsel. I agree that no regulator should be permitted to determine their own jurisdictional boundaries. But, in my IRB experience, that hasn’t happened.

    @ The Post In General
    Very cute! But, seriously, there should be more research into the practices of IRBs and their actual outcomes in terms of protection of human subjects. On the other hand, it is difficult to do. One person’s serious ethical issue is another person’s red herring.

    When I was first conscripted to join my first IRB (that’s right, I did not consent to my participation), my attitude towards ethical review was that this was a bunch of people who needed to get a life and stop interfering with good and important research. After a very short time on the IRB I came to realize that the IRB bent over backwards to work with researchers to make their protocols ethically sound when there were real problems. I should also add that I was truly shocked to see how many very dangerous (threats to life and limb) research protocols were submitted with little or no thought to minimizing the dangers to the extent possible, or to honestly telling potential subjects what they were getting themselves into. Over the decades, the situation has improved considerably, although such protocols still appear from time to time.

    • Rahul says:

      Can you give some examples of the very dangerous research protocols? I’m curious to know what sort of things researchers naively submit.

      • Clyde Schechter says:

        Protocols that threaten life and limb are not uncommon in health care research: it’s in the nature of the subject matter. What was distressingly common was the frequency with which the investigators would conceal from participants the fact that a drug being administered or a procedure being performed as part of the research would carry the risk of life-threatening adverse effects. Nowadays that is infrequent–possibly because the IRBs have effectively policed that.

        I do recall, in addition, several protocols that entailed risk to life and limb in ways that could not be justified as part of the risks of medical treatment. One example was a neuroscientist who wanted to do detailed studies of how alcohol impairs driving ability. The proposal was to have subjects come to the lab, get them drunk (which the investigator would verify both with neurologic testing and a blood alcohol level!) and then have them drive a real car on real streets under observation and with various electrophysiologic measures being taken continuously. I should add that this investigator was seriously outraged when the IRB told him that he could not do this study.

        Another example I vividly remember was an addiction researcher who wanted to obtain PET scans of heroin addicts while they were high. The protocol proposed to bring them into the laboratory, ask them how much heroin they injected per day, and then give them that dose prior to doing the PET scan. The problem with this is that street heroin is of very variable purity, and giving an actual dose equal to what the person thinks they are getting on the street carries a very high risk of being a lethal overdose. You would think that somebody pursuing a career in addiction research would know that, but he didn’t. This researcher, at least, was actually grateful to the IRB for pointing out the problem.

        • Rahul says:

          Wow. Your two examples are mind-boggling. Seriously, it’s hard to believe these are professionals.

          Even doctors concealing a drug’s risks seems so 1950s. I didn’t know they would still consider such a study.

    • Andrew says:

      Clyde:

      You write of “very dangerous (threats to life and limb) research protocols,” and I think we can all agree that some review of such protocols is a good idea. But the discussion here is about social research such as surveys which may be wasteful and may be unethical but which involve no threats whatsoever to life or limb.

      • Clyde Schechter says:

        I certainly agree that most social research is, at worst, wasteful and maybe unethical in unimportant ways. But even social research can be dangerous in some circumstances. Consider a study of battered women. If there aren’t adequate precautions to safeguard the anonymity of the participants and their whereabouts, their lives and health could be put in danger.

        Situations like that will be uncommon in social research, but not non-existent. So the question is how to identify those for special scrutiny. Just as BenK says that no institution should be the judge of its jurisdictional boundaries, I say that no researcher should be the judge of the safety of his or her research. Independent review, properly conducted, is, in my opinion, the way to go. It needn’t be burdensome.

        Now, the human subjects research laws were clearly drawn up with biomedical research in mind. Unfortunately, it doesn’t say that in the statute itself. Still, there is nothing in the law or regulations that stops IRBs from providing a simplified, expedited review process for research that obviously poses no risk. In fact, the regulations explicitly set out the conditions under which that is permissible, and nearly all social research will satisfy those conditions. If an institution’s IRB is not doing that, this is a matter that the faculty should take up with the IRB chair and the Dean of the institution, or bring to the attention of the faculty senate. There is no requirement that social research be reviewed with the same scrutiny as risky biomedical research. There is also nothing that prevents an institution from having separate IRBs for different disciplines.

        By the way, although the study described in the initial post is definitely social research, one of the things I really like about this blog is that it draws participants from many fields and disciplines, even the physical sciences. So I didn’t feel it was out of place to talk about IRBs from the biomedical perspective here.

      • Rahul says:

        I think we’ve been quite successful at regulating the studies that are dangerous to life and limb.

        Social science surveys seem to be the focus now because there’s quite a bit of gray area, or at least some questionable attempts, here. E.g., the recent Stanford/Dartmouth study that mailed 100,000 allegedly deceptive election mailers, or that racial discrimination study with a fake student emailing Andrew for an appointment.

        At the very least, social science surveys need to be policed for wastefulness. Unlike with lab rats, there’s a risk that we annoy the general population so much that people stop responding to surveys.

        • Andrew says:

          Rahul:

          I agree that social surveys can have issues (as I wrote above, “they may be wasteful and may be unethical”). Perhaps the IRB is a reasonable way to regulate these, perhaps not. After all, it’s not illegal to send people deceptive emails, so it’s not clear that research following such practices should be banned. Perhaps this is just an unavoidable hassle of modern life, an unfortunate side effect of the ease of cheap communication. Sure, it’s too bad that such researchers are poisoning the well of social trust, but that’s done all the time in non-research settings. One solution that’s been suggested for spam is to charge a small amount, such as one cent, per email. I don’t know how technologically feasible that is, but it would reduce the incentives for spamming.

          As for the Dartmouth study, again, it’s a fine line, because such activity as part of campaigning might be considered ok (setting aside the issue of the use of the state seal—if that’s illegal I assume the issue can be handled through the normal legal process). Considering those examples, the role of the IRB is not so much to protect research subjects but rather to protect the image, and thus the longer-term health, of the social sciences.

          All the above issues are important. I just think there’s zero (if not negative) benefit to the use of dangerous medical experiments as a baseline. Yeah, sure, there are edge cases (such as the issue of confidentiality in a survey of battered women) but these are far from typical, and we get a lot of IRB hassle for a lot lot less than that.

          • Rahul says:

            Well, it is not illegal to plagiarize either (I think). Yet, we as Universities do proscribe that, correct?

            It seems logical that Universities should not encourage deceptive emails sent by their employees, if only to protect their reputation.

            Whatever may be legal during campaigning, must we as Universities stoop to the same low level?

  5. D.O. says:

    If the subject matter you are dealing with is not inherently dangerous, as in sociology, a better way to control unethical research is post hoc, just as happened with the Dartmouth study.

  6. Junk Science says:

    RE: the protocol includes “ask their University’s HR advisers to get data on the people who are on the IRBs – race, gender, ethnicity, area of study, rank, age, experience.” Thus the IRB members would need to give informed consent, as they are subjects in the study (as opposed to the corporate body of the IRB), likely injecting a bias into the data, since the IRB participants would know that speed/delay is being recorded. They might work faster to produce a decision. Alternatively, one could ask for a waiver of informed consent, but such an IRB application would add a different bias to the speed/delay: the IRB decision process being measured would be that of an anomalous study that asks for a waiver of informed consent. …
