Born-open data

Jeff Rouder writes:

Although many researchers agree that scientific data should be open to scrutiny to ferret out poor analyses and outright fraud, most raw data sets are not available on demand. There are many reasons researchers do not open their data, and one is technical. It is often time consuming to prepare and archive data. In response my [Rouder’s] lab has automated the process such that our data are archived the night they are created without any human approval or action. All data are versioned, logged, time stamped, and uploaded including aborted runs and data from pilot subjects. The archive is GitHub (github.com), the world’s largest collection of open-source materials. Data archived in this manner are called born open.
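To give a sense of how little machinery this takes, here is a minimal sketch of what such a nightly job might look like (my own hypothetical illustration, not Rouder’s actual pipeline; the repository path, remote, and branch names are made up):

    # born_open_archive.py: hypothetical nightly archiving job, run e.g. from cron.
    # A sketch of the born-open idea only, not Rouder's actual code.
    import subprocess
    from datetime import date
    from pathlib import Path

    REPO_DIR = Path("/lab/data")  # hypothetical: the data directory is itself a git clone

    def git(*args):
        """Run a git command inside the data repository, failing loudly on error."""
        subprocess.run(["git", "-C", str(REPO_DIR), *args], check=True)

    def archive_nightly():
        # Stage everything new or changed, including aborted runs and pilot data.
        git("add", "--all")
        # Commit with a time stamp; --allow-empty lets the job succeed on days with no new data.
        git("commit", "--allow-empty", "-m", f"nightly data archive {date.today().isoformat()}")
        # Push to the public GitHub remote so the data are versioned, logged, and open.
        git("push", "origin", "master")

    if __name__ == "__main__":
        archive_nightly()

Once something like this is running, openness is the default: the data go public every night with no further human approval or action required.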

Rouder continues:

Psychological science is beset by a methodological crisis in which many researchers believe there are widespread and systemic problems in the way researchers produce, evaluate, and report knowledge. . . . This methodological crisis has spurred many proposals for improvement including an increased consideration of replicability (Nosek, Spies, & Motyl, 2012), a focus on the philosophy and statistics underlying inference (Cumming, 2014; Morey, Romeijn, & Rouder, 2013), and an emphasis on what is now termed open science, which can be summarized as the practice of making research as transparent as possible.

And here’s the crux:

Open data, unfortunately, seems to be a paradox of sorts. On one hand, many researchers I encounter are committed to the concept of open data. Most of us believe that one of the defining features of science is that all aspects of the research endeavor should be open to peer scrutiny. We live this sentiment almost daily in the context of peer review where our scholarship and the logic of our arguments are under intense scrutiny.

On the other hand, surprisingly, very few of our collective data are open!

Say it again, brother:

Consider all the data that is behind the corpus of research articles in psychology. Now consider the percentage that is available to you right now on demand. It is negligible. This is the open-data paradox—a pervasive intellectual commitment to open data with almost no follow through whatsoever.

What about current practice?

Many of my colleagues practice what I [Rouder] call data-on-request. They claim that if you drop them a line, they will gladly send you their data. Data-on-request should not be confused with open data, which is the availability of data without any request whatsoever. Many of these same colleagues may argue that data-on-request is sufficient, but they are demonstrably wrong.

No kidding.

Here’s one of my experiences with data-on-request:

Last year, around the time that Eric Loken and I were wrapping up our garden-of-forking-paths paper, I was contacted by Jessica Tracy, one of the authors of that ovulating-women-wear-red study which was one of several examples discussed in our article. Tracy wanted to let us know about some more research she and her collaborator, Alec Beall, had been doing, and she also wanted us to tell her where our paper would be published so that she and Beall would have a chance to contact the editors of our article before publication. I posted Tracy and Beall’s comments, along with my responses, on this blog. But I did not see the necessity for them to be involved in the editorial process of our article (nor, for that matter, did I see such a role for Daryl Bem or any of the authors of the other work discussed therein). In the context of our back-and-forth, I asked Tracy if she could send us the raw data from her experiments. Or, better still, if she could just post her data on the web for all to see. She replied that, since we would not give her the prepublication information on our article, she would not share her data.

I guess the Solomon-like compromise would’ve been to saw the dataset in half.

Just to clarify: Tracy and Beall are free to do whatever they want. I know of no legal obligation for them to share their data with people who disagree with them regarding the claim that women in certain days of their monthly cycle are three times more likely to wear red or pink shirts. I’m not accusing them of scientific misconduct in not sharing their data. Maybe it was too much trouble for them to put their data online, maybe it is their judgment that science will proceed better without their data being available for all to see. Whatever. It’s their call.

I’m just agreeing with Rouder that data-on-request is not the same as open data. Not even close.

89 thoughts on “Born-open data”

  1. On the same topic, a friend of mine was complaining about this article (http://faculty.wcas.northwestern.edu/eli-finkel/documents/InPress_HuntEastwickFinkel_PSci_000.pdf), both for the poor graphs and the ‘open science’ description. If you go to the end of the paper and read the Open Practices paragraph, you find out that it got an open materials badge for putting their survey online but the data is not publicly available, nor was the study preregistered. Going full circle to the graphs, not only is their data not available, the graphs in the paper are of model fits and not actual data points. I don’t think it’s going to raise your opinion of Psych Science at all.

  2. Beall and Tracy’s article does not acknowledge any funding source, although perhaps they didn’t need any external money to get 24 undergraduates and 100 MTurkers to answer a very short questionnaire. Had the research been funded by (e.g.) NIH, they would have been obliged to share their data after initial publication, although I know of at least one case where an author of NIH-funded research has refused to comply with this, citing entirely spurious reasons and, in effect, inviting the researcher who is asking for the data to take the risks involved in a public escalation of the issue.

      • The risk of being seen, in public, to take on a researcher who might well be of far higher status than oneself, and who might, via their circles of influence, be in a position to damage the career of the person asking the awkward questions. Call me paranoid. :-)

    • The PLOS One paper acknowledges funding from the Social Sciences and Humanities Research Council of Canada (#410-2009-2458), the Michael Smith Foundation for Health Research (CI-SCH-01862(07-1)), and a Canadian Institutes of Health Research new investigator award and operating grant (FRN: 123255).

      CIHR’s data sharing policy is consistent in spirit with open data but does not appear to require it: “Recognizing that access to research data promotes the advancement of science and further high-quality and ethical investigation, CIHR explored current best practices and standards related to the deposition of publication-related data in openly accessible databases. As a first step, CIHR will now require grant recipients to deposit bioinformatics, atomic, and molecular coordinate data into the appropriate public database, as already required by most journals, immediately upon publication of research results (e.g., deposition of nucleic acid sequences into GenBank). Please refer to the Annex for examples of research outputs and the corresponding publicly accessible repository or database.” http://www.cihr-irsc.gc.ca/e/46068.html#5.1.2

      However, the SSHRC does appear to require open data: “All research data collected with the use of SSHRC funds must be preserved and made available for use by others within a reasonable period of time. SSHRC considers “a reasonable period” to be within two years of the completion of the research project for which the data was collected.” http://www.sshrc-crsh.gc.ca/about-au_sujet/policies-politiques/statements-enonces/edata-donnees_electroniques-eng.aspx

      Neither funding agency makes an exception for people who aren’t so nice, who are not trusted friends, or who may disagree with your analysis or theory.

      • What would be interesting is to test how well or badly these rules work. I’m wondering what will happen if Andrew contacts the SSHRC and complains about Tracy / Beall not releasing the data. Will they grant her a “bad faith” exception? :)

        Are these open-data policies toothless or do they actually have some weight?

  3. Just to clarify– we have shared our data with every other researcher who has requested them (about 4-5 people, as far as I can remember). We chose not to share with you because you made it clear that we cannot trust that you will use the data in a good faith way. To me, good faith means taking all measures possible to give the people whose work you’re critiquing the opportunity to either respond to those critiques in the same outlet (ideally the same issue), and/or to provide a peer review of your critique before it is published. Good faith also means contacting authors before critiquing them in social media outlets like Slate, to make sure you get all the facts straight–something else you neglected to do. We obviously feel differently about this, but in my mind all this is part of what it means to be a scientist who supports principles of openness. Since you have demonstrated an unwillingness to support these principles as I see them, along with a willingness (even eagerness) to publicly critique our work without taking care to give us an opportunity to respond or correct errors, I made the decision that you are not a part of the open-science community of scholars with whom I would (and have) readily shared our data. -Jess Tracy

    • Jessica:

      I strongly recommend you follow Jeff Rouder’s policy and make your data publicly available. You may get some short-term gain by hiding your data from critics, but long-term I don’t think this serves scientific progress. Especially given that there may be many people, not just 4 or 5, who would be interested in seeing your data.

      One way to see this: Anyone can read your published articles, not just your friends or people you trust or people who ask you politely. Anyone. Even people who might be critical of your work, they get to read your paper too. I think all that Rouder is suggesting is that the same policy be followed with the raw data (excluding settings where data must be restricted for reasons of confidentiality or security). To the extent your research is important, your data are important too, not just to 4 or 5 people but to everyone, even to critics, even to people who annoy you and who might present your work in an unfavorable light.

      Again, I know of no legal obligation for you to share your data with people who disagree with you regarding the claim that women in certain days of their monthly cycle are three times more likely to wear red or pink shirts. I’m not accusing you of scientific misconduct in not sharing your data. Maybe it was too much trouble for you to put your data online (that’s typically my own reason for not posting data; it just seems like too much effort to untangle the mess that’s in my computer directory), maybe it is your judgment that science will proceed better without your data being available for all to see. Whatever. It’s your call.

      • Andrew:

        I think you are totally missing her point. She will gladly share her data with a random person, perhaps even with me, without any credentials.

        I think what she is saying is she’s making an exception of *you* in particular as she thinks you have demonstrated, by your actions, that your analysis of her data will be in bad faith.

        I’m not saying whether she is right in attributing bad faith to you, but your response seems entirely orthogonal to what she was saying.

        • Rahul:

          As I wrote, one advantage of Rouder’s “born-open data” proposal is that data are shared with everyone, even with critics. Just as her paper has been published for all to read, even critics, even people who might annoy her etc.

          Look: Tracy’s not hurting me any by not sharing her data. She’s just saying that she doesn’t want her data out there in the public domain, for whatever reason.

        • Andrew:

          Yes, I agree about open data. I’m a fan of that strategy too, in general. People shouldn’t need to beg for data.

          OTOH, did you really write the Slate piece without talking to them first? If so, that sounds iffy. Like you say, not illegal or anything but still.

        • Rahul:

          I didn’t talk with Daryl Bem about it either. Nor did I talk with Nicholas Wade before posting my review of his book on Slate. It might well be that these articles would’ve been better had they included interviews with the people whose work I was discussing. I come from the traditions of science rather than journalism, and critiques in science are typically done in the context of published work rather than personal interviews. For example, in statistics it is common to write articles improving upon or criticizing previously published work, and this is done by referring to published papers; usually I don’t think the subjects of the criticism are contacted beforehand. So, given my background, I don’t consider this iffy at all. The key is that I’m responding to the published literature; this is not like a newspaper report where I’m responding to unsourced allegations and it makes sense to quote the people involved.

          That said, sure, maybe my articles would’ve been stronger had I interviewed Bem, Beall, Tracy, Wade, etc.

          I did contact David Brooks a few times, very politely, regarding his errors, but I got no useful response from him. But Bem, Beall, Tracy, Wade, etc., might have had more to say.

        • Andrew:

          Not speaking to Tracy probably had very little effect on your article quality, if any. And those articles seem plenty strong to me as they are. So I have no worries on that count.

          To me, it was more a matter of good form. I just assumed it was common courtesy to contact a fellow academic before you publish such a critique.

        • I don’t see where it’s iffy to criticize published articles anywhere and anytime and without any permission or feedback from the author.

          If the authors want to respond, they should do so, and they shouldn’t feel like they’re supposed to get special treatment, or some kind of kid-gloves pre-access to your criticism, etc.

          Beall and Tracy are free to write their own blog posts, suggest an article to a Slate editor, whatever. I bet Andrew would be happy to have them send him a link to their response. He’d probably even put it front page, and then he’d explain what he thought about that response… and the process would continue, and what would they want? Pre-access to his blog post responding to their response so they could negotiate its contents? Baloney.

          What Tracy and Beall want seems to be a back-door gentlemen’s/ladies’ agreement that scientists will always softball each other for the mutual good of their funding and careers… it’s preposterous. Don’t want serious problems with your study discussed like your dirty laundry? Don’t publish seriously flawed work.

        • No one needs to ask for permission. But a heads up or a private discussion doesn’t hurt, in my opinion. I don’t see what good it does science by springing a surprise or keeping them out of the loop.

          It’s the difference between walking across the street to your neighbor to request him to tone down a noisy party versus calling the cops with a formal complaint right away.

          Yes, and I agree that was seriously flawed work.

        • The way I have dealt with this is to cc the authors at the same time as the letter to the editor is sent to the journal – that provides the heads up with no risk of entertaining special treatment.

          There are downsides – some authors are actually insulted, some will try to influence the editor.

          In one case I followed up the first cc with a later email saying that I had found new problems the authors could not have known about at the time of publishing their paper – mysteriously, the editor froze my submission the next day even though they had demanded revisions that I was still working on. That prevented the new problems from being discussed even though one of the authors had agreed the problems were real and important.

        • It may simply be a matter of different standards in different fields. In medicine and public health, if you have a criticism then you write a letter to the editor and the first time the authors hear about your critique is when they receive an email from the editor asking them if they want to respond to your letter. Having been on the receiving end of such letters before, I can see why some people might feel “blindsided” by them, or they might feel like they should have been contacted first before the critique is publicly aired (e.g., email them first before submitting the letter to the editor, or before publishing a blog post, etc.).

          My impression is that in other fields e.g. economics these things are taken much less personally. Sure there are exceptions (e.g. Caroline Hoxby’s oddly vociferous response to Rothstein’s critique of her work) but by and large this is just seen as the normal academic back and forth that everyone is expected to develop thick skin against.

        • @A. Tasso

          Good point.

          Also: Is it typical to publish your criticism in a journal different from the one in which the article you are criticizing appeared? I think that’s another interesting point here: The Tracy / Beall article appeared in Psychological Science. But I don’t think the critique by Andrew was submitted to the editor of Psych Science.

          Not sure if Andrew tried sending it there first and they rejected it? In any case I’m still confused: which journal’s information did Tracy want so badly from Andrew, so that she could contact the editor in question? Was that Chance? American Scientist? Both?

        • Rahul:

          I thought I’d made this clear, but, just one more time: My point in writing about Bem, Beall and Tracy, etc., was not to correct the record regarding their ridiculous claims but rather to explore more general and interesting issues regarding statistical inference for small effects. Again, my Slate article and American Scientist paper were not just about Beall and Tracy’s article; they used the Beall and Tracy article and other published work to make a larger point. I also wanted to reach a more general audience (the readers of Slate and American Scientist), not just psychology researchers.

          And I did publish a related article in Perspectives on Psychological Science, which is the sister journal to Psychological Science. It was the second paper on the topic that I tried there; the first was rejected as being solely critical and not constructive enough.

          Finally, I’ve written about my failed attempt to publish a correction of a different, statistically-flawed article as a letter in the American Sociological Review.

          Sometimes I think the readers of this blog aren’t aware of all the work that I do.

  4. A problem with the sharing-data-upon-request approach is that these data are often quickly lost. I’ve written probably ten authors for data and only two actually had the data archived. All the other requests were denied because the data were on old hard drives or old computers that were thrown away, etc. etc.

      • Yes. And it’s not as if a 4GB memory stick needs a mortgage to buy, or a Dropbox account requires a lawyer to sign up for. Are we seriously expected to believe that researchers don’t make backups? People who lose their data to hard drive crashes should expect to be treated little differently from those who deliberately destroy it.

      • You don’t have to throw it away; just park it on an 8-inch (7-inch?) disc that needs an obsolete operating system that has not been produced since the 1970s. It happened to a colleague of mine who wanted to include some unpublished dissertation data when he was writing a new paper in the late 1990s.

        He didn’t even need to write an email. He could just talk to himself.

  5. This reminded me that I made a plot of the data from the 2014 PLOS One paper (as provided by Tracy & Beall) that aims to clearly illustrate the rates of wearing red/pink conditional on conception risk (under varied definitions):
    http://dx.doi.org/10.6084/m9.figshare.1451384

    An interesting aside: in Tracy & Beall (2014) the idea is that this effect works when it is cold, but not warm. This is examined primarily with logistic regression. Actually the simple effect test for the cold day (in January) gives p = 0.06 (or p = 0.15 for the exact test, which can be conservative). Interestingly, if you use the 10-17 day window instead (as Gelman suggested as an alternative), you get p = 0.02 (or p = 0.04 for the exact test).
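    In case anyone wants to poke at numbers like these, here is a minimal sketch of the basic 2x2 comparison involved: wearing red/pink (yes/no) by conception risk (high/low). The counts below are hypothetical placeholders, not the actual Tracy & Beall data (those are in the figshare link above):

        # Hypothetical counts for illustration only; not the real Tracy & Beall data.
        # Rows: high conception risk, low conception risk.
        # Columns: wore red/pink, wore another color.
        from scipy.stats import fisher_exact
        from statsmodels.stats.proportion import proportions_ztest

        table = [[8, 22],
                 [5, 65]]

        # Normal-approximation test of the two proportions, roughly the simple
        # effect one would get from a logistic regression with one binary predictor.
        counts = [table[0][0], table[1][0]]
        nobs = [sum(table[0]), sum(table[1])]
        z, p_approx = proportions_ztest(counts, nobs)

        # Fisher's exact test, which can be conservative in small samples.
        _, p_exact = fisher_exact(table)

        print(f"approximate p = {p_approx:.3f}, exact p = {p_exact:.3f}")

    Changing the definition of the high-risk window just reshuffles which women land in which row of that table, which is exactly why the p-values move around as described above.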

  6. I don’t want to get into a lengthy back and forth about this, but just to quickly reply to a couple issues here:

    (1) By not contacting us prior to publishing in Slate, Gelman’s article did suffer – from several mischaracterizations and explicit errors, all detailed in our reply, which of course did not receive the same viewership as his initial post since it was not included in Slate.com (see: http://ubc-emotionlab.ca/2013/07/too-good-does-not-always-mean-not-true/.)

    (2) I’m not comfortable publicly posting data from participants who did not provide consent to have their data made available in public repositories, and especially in cases where it is impossible to remove all potential identifiers (e.g., in a small sample from one university’s local subject pool). However, I am happy to share the data with any individual researchers who don’t appear to have bad faith intentions toward the work, or at least demonstrate a similar openness in sharing information about their work with me.

    (3) Off-handedly referring to published research as “seriously flawed” without providing any justification for such statements doesn’t add any value to this discussion, and, in my mind, brings the conversation to a particularly low level.

    • Jessica:

      Of course I have provided many justifications for describing your work, and Bem’s, and others, as seriously flawed. As I said to Richard Tol: You’re young, and it’s not too late for you to do better work. But defending the indefensible is not a good start!

      Setting all details of your days-of-the-month studies aside, I recommend you read this post. Your theories may be valuable; the trouble is that your data are so noisy as to have essentially no bearing on your theories. One option would be for you to do purely theoretical work, another option would be to study larger effects, yet another option would be to continue to study the topics you’re studying, but with much more careful measurements. Any of these would have a chance of making progress. But your strategy of loose theorizing with noisy data analysis, that’s just . . . hmm, I don’t want to say it will lead nowhere, because it might lead to important discoveries. It’s possible. It’s just, the data aren’t saying what you seem to think they’re saying, which is why you’ve been in the position of defending indefensible claims such as the ovulation-and-clothing stuff. In any case, you can do what you want, I just hope that you and your colleagues take seriously this sincere advice that I’m giving. All of us have highs and lows in our research, and it’s not too late for you to write off some mistakes and move forward.

      • It seems that if one’s initial contact with someone is to call their abilities into question somehow, it is very difficult for them to believe any further interactions are meant to cause anything but further harm. Worse, this makes it difficult for them to actually attend to the substance of the original criticisms.

        I don’t doubt that Jessica believes you only mean to do further harm to her career and has not really grasped the substance of many of your criticisms. I am not being flippant here (as we have gone through this before) and it is or will be sad if a better way forward is not found.

        • Keith:

          Yes, sad but (likely) true. On the upside, the purpose of my work in this area is not to influence Beall and Tracy but rather to influence researchers more generally, and to develop statistical methods to handle these problems of inference for small effects. I’m giving Beall and Tracy free advice because, Why not? I wish them no ill and I’d like for them to have useful and productive careers. But really the main message is for others who do not have a personal stake in this example.

      • Andrew:

        One question I have is why were you so secretive about where you planned to publish your Tracy critique? What harm could it have done? Let’s assume Tracy did email the editor, perhaps with the same reply she posted on her private website; wasn’t your critique still strong enough to stand on its merits?

        If we are in favor of openness everywhere in the academic process, why not in this aspect? Why do you have to hide where you are going to submit and then spring a surprise?

        Was it a principled decision, a pragmatic one or just a whim?

        • Rahul:

          I didn’t think it appropriate to give Beall and Tracy any special “in” on the article, and I didn’t really feel like having the editor of our article get tangled up with Bem, Beall and Tracy, and all the other authors of papers we discussed. Publication is a messy enough process as it is without getting lots of extraneous people involved. To get all these authors involved just seemed like a can of worms. I’ve published hundreds of articles (mostly in academic journals but some in magazines such as American Scientist), and pretty much never have the subjects of the articles been involved in the editorial process. So I saw no reason to start now. I did promise Beall and Tracy that I would tell the editor of their request, and I did follow through during the editing process by sending our editor the following message:

          Also, we wanted to let you know that, a few months ago, we were contacted by Jessica Tracy and Alec Beall, the authors of one of the papers we discuss in our article. They were upset at what we wrote and wanted to directly contact the editors of the publication where we would be publishing our article. We told them that we did not feel that it makes sense in the editorial process to give any special role to the authors of papers that we discuss. It is common practice for papers in statistics and research methods to cite and discuss, often critically, the work of others, and in general the authors of work being discussed do not necessarily get any privileged role in the editorial process. But we assured them that we would inform our editor of their concerns; hence this note.

          And I pointed the editor to our blog exchange with Tracy.

        • Andrew:

          The researchers who authored the work you are critiquing are hardly “extraneous people” in my opinion.

          “Messiness” seems another weak argument to me. Open data release is messy too, yet we want to do it, in spite of the messiness and the risk of opening cans of worms.

          How many of your “hundreds of articles” were a critique so damaging that it almost invalidates the entire work?

          Your critique itself is very sound and strong. And the Tracy / Beall work is fatally flawed. And they deserve to be critiqued.

          But to me, the strength of your critique (and the seniority of your position and the amazing reputation you enjoy) is the very reason to bend over backwards and give Tracy / Beall the extra opportunity, wherever possible, to defend their work. And allowing them to contact your editor was one such chance.

        • Rahul:

          Maybe so; it’s just not the norm in statistics or political science or any academic field I’ve worked in, nor do I recall it being done in any journalistic endeavor I’ve been involved in. Which is perhaps why the many other authors of papers I’ve criticized in many other places have not demanded any of this sort of access. Loken and I were pretty stunned by Tracy’s request, it was so alien to our experience.

        • Andrew:

          The Patent process gets one part of this right. They post all applications online during review. If you want to mention a flaw you can contact the Patent Examiner. Pretty transparent.

          Maybe journals ought to start doing this. That would be a transparent way for people like Tracy to contact the editor without having to beg you to tell them to whom you would be submitting the paper.

          If we are all for data-openness why not have submission-openness too?

    • I am honored to be quoted at length. Thank you, Andrew. A few points:

      1. One of my reasons for making data born open is that I believe that data must always be interpreted. The interpretation is where we make much of our intellectual contribution. Interpretation and analysis is where all the action is. It is the value-added part of the process. I strive to make sure everybody has the right to interpret the data I collect as they wish because (A.) I know that finding structure in data is really, really hard, (B.) there is more than one judicious interpretation of any set, and (C.) there is bound to be someone out there that has a more insightful interpretation of my data than I do.

      2. I think the proprietary view of data is harmful. Instead of ownership, we should be thinking in terms of stewardship. Are you a good steward for the data you have the privilege to collect? It is a privilege. I am not saying we don’t work hard or don’t deserve the success we beget from our data. We do. Nonetheless, it is a hell of a privilege to be a scientist.

      3. The newest version of the paper is now in press at Behavior Research Methods. The pdf is at http://pcl.missouri.edu/sites/default/files/r_1.pdf

      Best, Jeff

  7. I just want to point out that the APA Ethical Guidelines (such as they are) do seem to require researchers to share their data upon the request of “competent professionals” (or words to that effect).

    I guess there’s a loophole there.

    • 6.25 Sharing Data.

      After research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose, provided that the confidentiality of the participants can be protected and unless legal rights concerning proprietary data preclude their release.

      • Loophole 2: “seek to verify the substantive claims through reanalysis and who intend to use such data only for that purpose”

        My guess is AG doesn’t really care about the substantive claims in question (not his field, after all) and wouldn’t have used the data only to shed light on them; he would have used the data to further his forking-paths argument. My reading (as a person claiming absolutely no expertise or standing) is that Tracy and Beall weren’t obligated by the quoted section to help him do that.

        • Yeah, what could they possibly learn from alternative analyses of their data, performed by people who are skeptical of their claims?</sarcasm>

        • I’m not saying their choice not to share the data was a good one; I’m just saying that the wording of that particular chunk of text doesn’t appear to obligate them to do so.

        • I recently had a discussion with a colleague over whether “reanalysis… and only for that purpose” implicitly includes “writing up that reanalysis and submitting it to a journal”, or whether the researcher asking for the data needs to stipulate (presumably at the time of asking for the data) that, should the reanalysis result in sufficiently interesting discoveries, it was likely that an article would be written, possibly without any further conversation with the original authors. We didn’t reach a conclusion, other than that that APA wording is suboptimal.

        • > should the reanalysis result in sufficiently interesting discoveries, it was likely that an article would be written
          The problem of selective publication (e.g. file drawer problem/p-value censorship) has not been learned!

          Of the 100 who sought reanalysis only those 5 that were “interesting” are brought to light.

  8. NIH should implement for real their requirements of publishing the research data from the studies they sponsor. That is: no open data access for the current study (on GitHub, which is the perfect sharing location; or at least on clinicaltrials.gov) = no continuation of your R01. If Columbia University or University of Missouri is the only sponsor, then the deal is up to the researcher, their conscience, and the university. But the federal sponsor can do better in enforcing the good practices across the field.

    • +1

      So also, the journal editors. These are the guys who have the power to make this happen. No dataset posted online? Sorry, no article review till you do that.

    • > the federal sponsor
      They can, but is it in their perceived best interests to document how poorly used the money was by many of the researchers they choose to fund? Recall, at least until recently, almost all researchers got good press if they wanted it.

  9. I was one of the people who asked for and got the data promptly. They should get some credit for that; it’s more than what 75% of the people I have asked for data have managed to do. In psycholinguistics, the responses range from “no, I won’t give it to you”, to “sorry, I lost it” (actually, I do believe some of the people who say that), to (most common) no response (which gives plausible deniability: you can always say you never got the email).

    I should mention that in my opinion graduate students might be more unwilling to release data, as they could be afraid of the consequences for them if something doesn’t pan out (I’m not talking about LaCour here, just well-intentioned students doing their work).

    I also want to point out that Andrew has also refused to release the code associated with his paper (to me ;). The grounds were that it’s too messy, but still. So even the best people out there are doing a sub-optimal job. Jessica Tracy seems to be getting a particularly hard time of it, seems a bit unfair to me. Andrew does not post his code with his published papers (not in general; there are some cases, such as the WAIC paper that has code, I think). Why not? Surely if you are going to attack people on this count, you have to set an example.

    That said, one odd thing I found in the Tracy and Beall data was that the analysis reported in the paper is statistically significant only if you collapse columns for red and pink into one (redorpink, say); if you analyze red alone, there is no effect (which makes the titles of their papers misleading). In fact, the effect is driven by pink; if you analyze pink alone, that’s what’s driving the effect. So the paper could have concluded that pink is the color that signals whatever it is that it is supposed to. If they really had planned to collapse red and pink into one category, why did they record two separate columns? I bet it’s a post-hoc step; it’s what I would have done too in my pre-statistical-education life. Heck, in psycholinguistics (and probably also psychology), we make up the entire research question backwards after we discover something accidentally in the data ;)

    A final point is that by refusing to share the data with Andrew, and by refusing to post it publicly, Jessica Tracy *could* be in violation of some official-looking data sharing requirements; see here:

    http://www.science.gc.ca/default.asp?lang=en&n=2BBD98C5-1

    I quote:

    “Method of data sharing: Investigators are required to either (1) deposit data in relevant subject or institutional repositories; or, (2) where there are no repositories hold the data locally, and make it available through a web-based presence; or (3) retain data so that upon request, other researchers can have access to data.”

    There is of course a loophole, as people have pointed out:

    “There are also a number of common exceptions that are often included in data sharing policies:

    Privacy and confidentiality of data: The privacy of individuals who participate in research and the confidentiality of the data must be protected at all times. Data intended for broader use must be free of identifiers that would permit linkages to individual research participants and variables that could lead to deductive disclosure of the identity of individual participants. In some cases where data cannot be stripped of identifiers, for example longitudinal studies that collect data over a period of time and must compare data points, data may be exempted from the data sharing requirements or data sharing may be qualified.

    Intellectual property: Policies may permit delays in sharing research data for a period of time, in cases whereby institutions or researchers are applying for patents or developing new applications based on that data.

    Traditional knowledge: Where local and traditional knowledge is concerned, rights of the knowledge holders shall not be compromised.

    Sensitive data: Where data release may cause harm, specific aspects of the data may need to be kept protected (for example, locations of nests of endangered birds or locations of sacred sites, or data related to national security)”

    It would be really great if Rouder’s approach simply became the norm. Just yesterday I read about a meta-analysis of ERP studies in which not a single numerical value was presented in the summary table, since the papers did not provide enough information to extract that crucial detail. So the meta-analysis was limited to a sort of qualitative review of “effect was present”, or “effect was absent”, which everyone knows is totally misleading.

    • Shravan:

      You write, “Surely if you are going to attack people on this count [not releasing data], you have to set an example.” I’m not attacking anyone. All I said was that I gave Beall and Tracy the advice that, instead of sending me their data, they should just post it where all could see it. As I wrote above, I don’t think they’re doing themselves or anyone else any favors by releasing their data selectively. But it’s their call. No attacks happening here. I’m working to make my own analyses more reproducible. But it is work!

      • Andrew, you may not have intended it to be an attack, but from Jessica Tracy’s responses it seems it sure feels like it to her. To be honest, if I were her, I would feel under siege too (but I would not need to get into an argument about releasing my data online as I do it already).

        I do agree that they should just put the data up online on their home page and be done with it. I don’t understand what the big deal is, and why she’s invoking this limp argument about protecting the identity of local participants. There’s nothing in the data that could allow me to find out who we are talking about, and if there is, you can strip it out. After all, the final analysis is a generalized linear model, there’s only so many columns of information you need.

        I speculate that’s just a cop-out on her part; it’s hard to concede a point after you have taken a stand. It feels like a defeat to say, you know, you’re right, I should just post it online. It’s what we are trained to do as scientists: stick with a position and stay with it no matter what.

        Andrew, why not hire an assistant (grad student) to curate your data for public consumption? That frees you up for the more demanding and less tedious work. He/she will probably find your errors for you too, nothing like a fresh pair of eyes to look at old code ;)

        • > we are trained to do as scientists; stick with a position and stay with it no matter what.
          Nothing less scientific than that – being scientific is bending over backwards to find out how you might be wrong and allowing all qualified others to bend your work any way they see fit. “Qualified others” does not refer to a restriction but rather to the background knowledge presumed.

          The training you are no doubt referring to is building and maintaining an academic reputation – it’s academic training.

          (Shravan, not disagreeing with you, just rewording the argument I think you are alluding to for “stay with it no matter what.”)

        • Yes, yes, I know all that :)

          I just don’t know many people who do that (I do know a few). Try listing out people (any field) who have published papers falsifying a position they stand for. There won’t be many. This attitude becomes ingrained and kind of spreads to everything.

        • The whole idea of a scientist having “a [falsifiable scientific] position they stand for” is so strange to me. In mathematics, people often have “conjectures” which they try to prove, but if in the process they find a counterexample — well, the counterexample is worth publishing (unless of course it’s so trivial that they should have thought of it before making the conjecture), since counterexamples are usually helpful in understanding the subject.

          In mathematics, there is a longstanding tradition of disseminating “preprints” before a result is formally published. One consequence of this practice is that often mistakes are caught before formal publication — and everyone benefits from that. This seems to be an example of a positive attitude that has become ingrained. It seems to be spreading a little to other fields, but there is still a long way to go.

        • Martha:

          If one thinks of mathematics as experiments performed on diagrams or symbols rather than empirical objects, it is much, much easier for many others to replicate almost without effort.

          I do think that is the reason for most of the difference.

        • Keith,

          I find it hard to conceive of mathematics as “experiments performed on diagrams or symbols.”

          However, I do believe that it is much easier to check the reasoning involved in a (well-exposited) mathematical argument than to replicate an experiment. This difference between math and much science probably contributes to the difference in willingness to disseminate research informally before publication.

          Another relevant difference is that many publications in fields other than mathematics limit the size of articles, so that details have to be omitted (or made available elsewhere). Many papers in science are what in math might be called “research announcements,” with full details left to a larger publication. (However, with the ArXiv, the practice of publishing research announcements in math seems to have died out.)

        • It is the difference between intent & outcome perhaps.

          Andrew does not desire to “attack” Tracy / Beall, Brooks, etc. but his actions get perceived as attacks by others including Tracy etc.

      • There was this cool paper by Chung et al. on variance-covariance matrices and specifying priors for the correlation matrix. Dec 2013 or so… wow, that was a long time ago. Time’s just rushing by, and I’ll be dead before I even know what hit me!

  10. Since when is scientific misconduct the same as legal misconduct? It seems apparent that they don’t want to share their data because they don’t want to get criticized more. That is scientific misconduct in my opinion.

      • Both Andrew’s and Jessica Tracy’s arguments can be read as “ethical” ones in the sense that they are about the values that maintain communities of practice. Tracy is saying that she wants to work in a community that only requires her to share her data with people she trusts. Andrew is saying we should either trust everyone (in one sense) or not demand that we should trust (in another sense) anyone. Both are articulating constitutive values, values that shape who we are when we call ourselves scholars. They construct a research identity, a scholarly persona, a scientific ethos.

        • Nice.

          >free to ignore the ones you don’t think are serious
          At Oxford, it was considered a breach of academic responsibility to ever ignore another faculty member.
          Having not ignored them, you could then choose to disregard the concerns they raised ;-)

          I have encountered similar considerations as you raise here in what are called working styles. One of these working styles is a strong preference for being inclusive, co-operative and non-confrontational – the most important outcome being every one getting along and having an opportunity to contribute.

    As one of my directors (fortunately not for long) put it – what’s important in research is not whether people are talented but whether they get along. Took me some time to realize they were not joking.

  11. Interesting thoughts all… and I will stay out of the personal animosities (which I am clueless about). But it does seem to me that this is largely an “academic” debate that sounds like academia is a world unto itself. In reality, we are individual agents with careers that depend on external pressures – and those pressures reward overstating findings and making replication difficult, and penalize humility and modesty. I believe there is some difference between fields, with the traditional sciences being more cautious than social sciences. This is probably due to the training that scientists get and the academic culture they train in. But I think that is changing as media and public perception are affecting all of us in similar ways.

    I am afraid that the reality is that truly open data does not pay. And, less than open data pays too well. Unless we change those incentives, I believe little will change in substantive ways. And, to make it change largely depends on the willingness of those at the top (editors, researchers with established reputations, etc.) to force change. It would not be difficult. An editor of a prestigious journal merely needs to announce that articles will not be published without the data being publicly available, that the data will be hosted on the journal’s website, and that the submitted manuscripts will be made available for all to comment on for a period of 3 months. After which, the editors will make a further decision about how to proceed. Yes, there are some messy details, but certainly we are capable of dealing with those – it is easier than most of the academic work we do.

    But it won’t happen because the incentive for those at the top is not to change the system which helped get them there. The risks and costs are too obvious and the payoffs distant, ambiguous, and fraught with further problems (like having to clean up your data so others can actually try to replicate, dealing with all the replication attempts, etc.).

    To perhaps oversimplify with a hypothetical example: a department at a prestigious university has a choice between two applicants coming up for tenure, but there is only one slot. One candidate is flamboyant, gets media attention, has good training, but their work is never replicated (nobody else has their data) and there are reasons to suspect it may not be right (such as the garden of forking paths or perhaps worse). The other candidate is understated – doesn’t get published as much because they don’t make attention-getting claims, spends a lot of their time making their data readily available to others, and then more time revising their work based on what feedback they get.

    In today’s real academic environment, who do you think will be promoted? And, what do you think can be done to change that outcome?
    Sorry to be pessimistic, but this has been my experience. And I am tired of sending requests for data, clarification, etc. to authors and getting no responses. I have much better success with authors at government agencies than in academia – and that is really my point. Incentives are clearly different.

    • ++

      I actually met a Dean once (7 years ago) who stated their first priority in recruiting faculty was to make sure that candidates who understated their work never got hired – given the current context I would have to admit they were doing their _job_.

  12. “good faith means taking all measures possible to give the people whose work you’re critiquing the opportunity to either respond to those critiques in the same outlet (ideally the same issue), and/or to provide a peer review of your critique before it is published. Good faith also means contacting authors before critiquing them in social media outlets like Slate, to make sure you get all the facts straight–something else you neglected to do.” – Jessica Tracy

    That is a strange definition of “good faith”. Surely criticising each other’s work is part of our job as scholars. That’s definitely how we think about it in economics. Getting your work critiqued by other scholars – yes, even scholars who didn’t email you to discuss it first – is good for you. It makes your work better in the long run and it means that nobody is misled by your work now.

    Step back and consider a world in which we were all forced to run our academic critiques by the people being criticised. The incentives for the scholar being criticised are terrible – they can easily create a long, drawn-out private discussion that seriously delays the day when the criticism ever reaches the public or the rest of the research community. They might do this not because they are evil but because admitting they made bad mistakes is embarrassing and we all subconsciously want to avoid it. But we need to prevent this subconscious urge from having any power. The longer this private discussion goes on, the longer the rest of the community is potentially being misled by the results, which stand uncriticised.

    For the record, I’m a junior scholar whose work often uses other people’s (publicly available) data. If I’m confused by something in their data or can’t replicate their basic results on their data, then I do email them to see if we can figure out what’s going on before I post my work online. This helps to save me embarrassment if my critique or failure to replicate is based on a mistake, which I’m very worried about as a junior person. But I’m not *obligated* to do that. If my critique is based on dumb mistakes and I don’t bother to ask the original authors for help figuring out what’s going on, then I’ll get publicly embarrassed by the original authors, which would serve me right. If my critiques are solid then the authors should engage with me and take it seriously, and it does not matter whether or not they have ever heard of me before. A public record and public dialogue about these disagreements is not a bad thing and certainly not indicative of “bad faith”.

    • Also, this is why I always disregarded advice to approach the speaker privately after their talk – if I am right, it is far more likely that someone in the audience will learn something than the speaker (see Andrew’s comments on this), and if I am wrong, I get that recalcitrant experience of reality and learn to ask better questions.

      (Maybe not good career advice for early researchers though, asking questions of those whom they may later have to ask for jobs or references.)

      • Keith,

        One of the odd cultural things about economics is that asking questions in seminars is not just tolerated but expected, from senior faculty, junior faculty and even graduate students. Participating in seminars (not just sitting there, but actively engaging) is actually respected in the field, at least in my experience. I mean, no one likes a jerk, but many researchers specifically want to go give seminars in departments with engaged and critical seminar participants. You know, because they are challenged and they learn stuff that way. It is one of the cultural aspects about economics as a field that I really like.

        • jrc:

          As an aside, my director in Toronto was a health economist and told me the first priority of my job was to ask questions in seminars, but many others warned me not to do that and/or stated they chose to ask only privately after the talk.

          The sign of a good researcher is that they like to be asked difficult questions, because they actually understand how that will help them in the long run.

        • In the ideal world the academic establishment is there to further understanding. In the real world, a big goal is to further careers.

          Therein arises the differential attitude towards hard questions.

      • Remember, this conflict originated in a posting on Slate, not in a scholarly paper or presentation, nor was it addressed to scholarly audiences.

        I’m sure it was not deliberate, but it was certainly humiliating and degrading to the targets.

  13. Rouder “lab has automated the process such that our data are archived the night they are created without any human approval or action”

    This needs clarification. Obviously, data publication needs approval from the person who produced the data – the human participant. Either the subject is asked for approval (and some datasets may not be published if the person disapproves), or Rouder’s lab does not ask for data-publication consent (and there is an ethical issue), or subjects who do not provide consent are excluded from participation (and the sample may be a biased selection). In any case this is especially problematic for clinical and developmental psychology.

    I think this “easy open data” meme is harmful. Good open data needs documentation that goes beyond the publication. This requires extra effort that should be planned for.

    • Usually (at least in psychology), subjects sign a consent form before any data is collected. Consent forms typically have to have a statement about how the data will be used, including protecting the subject’s identity. However Rouder describes his data anonymization process, subjects are apparently satisfied enough that they sign the consent form and produce the data. If the consent form isn’t specific enough, or it turns out the data is not anonymized enough, my guess is that Rouder would be in trouble with his IRB.

      • Alex:

        Yes, this is one thing that bothered me about Jessica Tracy’s response when she wrote:

        I’m not comfortable publicly posting data from participants who did not provide consent to have their data made available in public repositories, and especially in cases where it is impossible to remove all potential identifiers (e.g., in a small sample from one university’s local subject pool). However, I am happy to share the data with any individual researchers who don’t appear to have bad faith intentions toward the work, or at least demonstrate a similar openness in sharing information about their work with me.

        She cited confidentiality concerns for not sharing the data publicly or with critics, but she was not bothered by confidentiality when non-critics asked for the data. Had the study been “born open,” i.e. participants had been told ahead of time that their anonymized responses would be posted online, none of this would’ve been a concern.

  14. And the data can be images too. See blogs.discovermagazine.com/neuroskeptic/2015/06/16/data-duplication-in-25-of-cancer-biology-papers for discussion of a paper where authors are not always careful which images they reprint.

  15. Dear Andrew,

    It is mandatory to share raw research data for any researcher affiliated with any of the Dutch universities and/or with any of the Dutch research institutes that endorse ‘The Netherlands Code of Conduct for Academic Practice’ ( http://www.rug.nl/about-us/organization/rules-and-regulations/algemeen/gedragscodes-nederlandse-universiteiten/code-wetenschapsbeoefening-14-en.pdf ).

    “Principle 3 Verifiability. Presented information is verifiable. Whenever research results are published, it is made clear (..) how they can be verified. (..). 3.3. Raw research data are stored for at least ten years. These data are made available to other academic practitioners upon request, unless legal provisions dictate otherwise.” (page 8)

    There is also a recent case from the University of Groningen ( http://www.rug.nl ) where two researchers have been found guilty of violating the rules of research integrity because they were refusing to share raw research data with other scientists. This case is covered by science journalist Frank van Kolfschooten in a recent article in the Dutch newspaper NRC ( http://www.nrc.nl/nieuws/2015/07/01/universiteit-integriteit-in-geding-bij-taalfoutonderzoek/ ).

    The researchers in question are Dr Anouk van Eerden and Dr Mik van Es. They were unwilling to share raw research data from their PhD thesis with other Dutch researchers, Peter-Arno Coppen of Radboud University Nijmegen, Carel Jansen of RUG and Marc van Oostendorp of Leiden University. These three filed a complaint with RUG when Dr Anouk van Eerden and Dr Mik van Es continued with their refusal to share the raw data. A Committee of RUG has decided that the allegations were founded. Please note that both Dr Anouk van Eerden and Dr Mik van Es had promised in public, during the defence of this thesis, that they would ‘act in accordance with the Netherlands Code of Conduct for Scientific Practice’. Dr Anouk van Eerden and Dr Mik van Es are no longer affiliated with RUG, so RUG was unable to punish them. Their point of view can be found at http://basaleschrijfvaardigheid.blogspot.nl/ (in Dutch).

    The “ESF-ALLEA European Code of Conduct for Research Integrity” ( http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegrity.pdf ) also lists rules in regard to sharing raw research data.

    * page 6. “1.4 Good Research Practices. (…). 1 Data. All primary and secondary data should be stored in secure and accessible form, documented and archived for a substantial period. It should be placed at the disposal of colleagues.”

    * page 10/11. “2.2.3 Integrity in science and scholarship. Principles. (…). These are principles that all scientific and scholarly researchers and practitioners should observe individually, among each other and toward the outside world. These principles include the following: (…) open communication, in discussing the work with other scientists (…). This openness presupposes a proper storage and availability of data, and accessibility for interested colleagues.”

    See also http://www.eur.nl/researchmatters/research_data/ for background about the policy of data management at Erasmus University Rotterdam (The Netherlands).
