Pre-results review: Some results

Aleks Bogdanoski writes:

I’m writing from the Berkeley Initiative for Transparency in the Social Sciences (BITSS) at UC Berkeley with news about pre-results review, a novel form of peer review where journals review (and accept) research papers based on their methods and theory — before any results are known. Pre-results review is motivated by growing concerns about reproducibility in science, including results-based biases in the ways research is reviewed in academic journals.
Over the past year, BITSS has been working with the Journal of Development Economics to pilot this form of peer review, and we recently shared some of the lessons we learned through a post on the World Bank’s Development Impact blog. In a nutshell, pre-results review has helped authors improve the methodological quality of their work and provided an opportunity for earlier recognition – a particularly important incentive for early-career researchers. For editors and reviewers, pre-results review has been a useful commitment device for preventing results-based publication bias.
I’m attaching a press release that details the story in full, and here you can learn more about our Pre-results Review collaboration with the JDE.

I don’t have time to look at this right now, but I’m forwarding it because it seems like the kind of thing that might interest many of you.

Pre-results review could solve the Daryl Bem problem

I will lay out one issue that’s bugged me for a while regarding results-blind reviewing, which is what we could call the Daryl Bem problem.

It goes like this. Some hypothetical researcher designs an elaborate study conducted at a northeastern university noted for its p-hacking, and the purpose of the study is to demonstrate (or, let’s say, test for) the existence of extra-sensory perception (ESP).

Suppose the Journal of Personality and Social Psychology was using pre-results review. Should they accept this hypothetical study?

Based on the above description from BITSS, this accept/reject decision should come down to the paper’s “methods and theory.” OK, the methods for this hypothetical paper could be fine, but there’s no theory.

So I think that, under this regime, JPSP would reject the paper. Which seems fair enough. If they did accept this paper just because of its method (preregistration, whatever), they’d open the floodgates to accepting every damn double-blind submission anyone sent them. Perpetual motion machines, spoon bending, ovulation and voting, power pose, beauty and sex ratio, you name it. It would be kinda fun for a while, becoming the de facto Journal of Null Results—indeed, this could do a great service to some areas of science—but I don’t think that’s why anyone wants to become a journal editor, just to publish null results.

OK, fine. But here’s the problem. Suppose this carefully-designed experiment is actually done, and it shows positive results. In that case they really have made a great discovery, and the result really should be publishable.

At this point you might say that you don’t believe it until an outside lab does a preregistered replication. That makes sense.

But, at this point, results-blind review comes to the rescue! That first Bem study should not be accepted because it has no theoretical justification. But the second study, by the outside laboratory . . . its authors could make the argument that the earlier successful study gives enough of a theoretical justification for pre-results acceptance.

So, just to be clear here: to get an ESP paper published under this new regime, you’d need to have two clean, pre-registered studies. The first study would not be results-blind publishable on its own (of course, it could still be published in Science, Nature, PNAS, Psychological Science, or some other results-focused journal), but it would justify the second study being published in results-blind form.

You really need 2 papers from 2 different labs, though. For example, the existing Bem (2011) paper, hyper p-hacked as it is, cannot in any reasonable way serve as a theoretical or empirical support for an ESP study.

I guess this suggests a slight modification of the above BITSS guidelines, that they change “methods and theory” to “methods and theory or strong empirical evidence.”

Methodology is important, but methodology is not just causal identification and sample size and preregistration

In any case, my key point here is that we need to take seriously these concerns regarding theory and evidence. Methodology is important, but methodology is not just causal identification and sample size and preregistration: it’s also measurement and connection to existing knowledge. In empirical social science in particular, we have to avoid privileging ill-founded ideas that happen to be attached to cute experiments or identification strategies.

28 thoughts on “Pre-results review: Some results”

  1. “Pre-results review” is what is elsewhere called “Registered Reports” if i am not mistaken. I am very critical of “Registered Reports”.

    1) One problem being that they seem to have been designed very poorly so that journals may in fact not even “register” anything, and/or that the “registration” seems to not be available to the actual reader of the paper. See: “Mapping the universe of Registered Reports” by Hardwicke & Ioannidis (2018).

    2) On top of that, i severely question whether it is scientifically valid and/or appropriate for journals/editors/reviewers to have the “submission evaluated based on the importance of the research question”. And i severely question whether it is scientifically valid and/or appropriate for journals/editors/reviewers to have a hand in the actual design of a study (and perhaps even the research question?!)

    3) This all, in my opinion and reasoning, actually possibly reinforces many of the problematic issues in science/academia by giving way too much power and emphasis on the journals/editor/reviewer system that may have plagued science for decades now.

    (Side note: “Registered Reports”, how it’s currently done, is basically the same as what was described in a 1976 book by M. J. Mahoney called “Scientist as Subject”)

    • Quote from above: “3) This all, in my opinion and reasoning, actually possibly reinforces many of the problematic issues in science/academia by giving way too much power and emphasis on the journals/editor/reviewer system that may have plagued science for decades now.”

      A possible additional problem with all these “improvements” is that they can be a step towards other things. Things could start with “Registered Reports”, which you might think is a “good” thing. Then you promote, and support, it.

      So, now “Registered Reports” are implemented at lots of journals. Hooray, you think, this is “good” for science. After a while, though, “Registered Reports” are being “supplemented” by all kinds of additional things.

      Things like “special editors” that possibly only publish the kind of research they like. Or, connecting “Registered Reports” to grants. Etc. Because it’s now been implemented in lots of places, these “supplemental” things can easily be implemented and are almost impossible to stop.

      That way, certain “improvements” are really some sort of “Trojan Horse”. They may seem “good” at first, but they can quickly result in “bad” things for science.

      • Quote from above: “That way, certain “improvements” are really some sort of “Trojan Horse”. They may seem “good” at first, but they can quickly result in “bad” things for science.”

        The “Registered Report” mess (where the registration is apparently often not even available to the actual reader of the paper) might be an example of this already: something is proposed and implemented that is just about the exact opposite of what the name implies and what has been talked about all this time.

        Also see the discussion concerning “open practices badges” here, where the implementation of the badges (based on flimsy and/or flawed “meta-scientific” research?) might also be just a step towards other things (i.e., having some “official” party verifying whether the badges were handed out “correctly”):

        https://blogs.plos.org/absolutely-maybe/2017/08/29/bias-in-open-science-advocacy-the-case-of-article-badges-for-data-sharing/#comment-33101

        • In line with this i am also increasingly annoyed with so-called “collaboration” and people or institutions that “take it upon themselves” to “manage” all this “collaboration”.

          You could have one “institute” or small group of people that always asks for others to help out with some new “collaborative” project that is sure to provide some “meta-scientific” knowledge that is “good” for science. If this happens a lot, i fear that in the end you would give that one institute, and associated people, a lot of power and benefits.

          Funders, the media, young people desperately wanting to be “part of it all”, etc. will all flock to this one institute, and associated people. This institute, and associated people, will in turn get way too much power and influence, and are probably the only ones to reap a lot of the benefits of all this “collaborative” stuff.

          Perhaps in this way “open science” and “collaboration” can lead to “bad” things as well. And/or it may be some sort of “Trojan Horse” for a lot of unwanted stuff.

      • Quote from above: “Things like “special editors” that possibly only publish the kind of research they like. Or, connecting “Registered Reports” to grants. ”

        What’s also possibly useful, and funny, to think about in this light is “publication bias”.

        “Publication bias” is defined in many ways, but often includes something that relates to “the results” influencing the decision to publish. This has probably led to almost only publishing “significant” results in Psychology, for instance.

        Now, i reason “publication bias” may not only stem from “the results” but may also stem from things like “conflicts of interest”. In fact, perhaps you could argue that “publication bias” includes all things that lead to certain research not being published for whatever reason. In my reasoning this can include “special editors” that may have a certain view about which research is “important enough” to be executed, and subsequently published.

        So, “Registered Reports” may solve one form of “publication bias” (i.e., based on the results of the research) but may create and/or reinforce a different form of “publication bias” (i.e., based on the topic, whether it is deemed “important”, based on the researchers, etc.)

  2. Pre-results review is sometimes performed in grant proposal review or program shaping… but then it isn’t in a publication form and sometimes the scientists aren’t held to it.

    • I am pretty sure the next step in this “Registered Report” mess is to somehow attach money/grants to it.

      That way the journals basically decide which (type of) research gets all the resources, AND they can have a say in the actual design as well!

      Another way to possibly mess science up even further with “Registered Reports”! (but if you shout “open science” and “incentives” a lot nobody will ever really think about it)

  3. Wouldn’t it also be important to allow potential research that may be useful from a policy perspective, even if the theoretical basis to justify it may not be as strong?

    Likewise, let’s say a researcher can get a dataset that no one has ever used before, and that it may be a source of an interesting exploratory analysis. Should the journal publish the results, stating clearly that they have no causal interpretation, as a way to encourage confirmatory research?

  4. Pre-results review, and pre-registration in general, does nothing to deal with a huge problem that I never seem to hear about: in many areas of psychology (social, developmental, maybe others) it’s standard practice for researchers to have their grad students “pilot” a study repeatedly, altering the procedure or hypothesis or measures or population in each iteration, until they can coax a theoretically-consistent effect to show up; only then do they plan the “real” study. The researchers then feel that they must have an especially robust theory+design because it was the only one of however many permutations that worked out. You could call it a pre-garden of forking paths: instead of taking multiple paths through the data from a single experiment until you find a path with p<.05, you take multiple paths to multiple experiments until you find one with an effect, then construct a very narrow explanation around the effect that links it to the theory you wanted to prove. I suspect this is one reason it's so difficult to replicate results outside of the lab that generated them, and why the authors are genuinely shocked: they've so finely tuned their research that generalization is impossible!

    • Michael:

      I think what you’re pointing to is a larger issue, that, ESP aside, just about any claim—or its opposite—can be given a theoretical justification that is considered acceptable by some subset of researchers in evolutionary psychology, economics, sociology, political science or other empirically-based social sciences. These are fields that excel in explaining just about any possible pattern in data.

      To put it another way: the above requirement of justification based on “theory” is not enough, as theory is so general it can cover just about anything. Even that ridiculous paper on beauty and sex ratio could be considered to have a theoretical justification.

      • I agree that unfalsifiable theory plays a big role here, but my real concern is methodological: these researchers aren’t just tweaking the theory or the hypothesis in each iteration, they are tweaking their methods and measures, too. I realize now I should not have said they “construct a very narrow explanation around the effect” to link it to theory. What I meant to say was that they construct a very precise experimental setup to achieve the effect predicted by theory. The series of pilots is effectively sorting through mediators, moderators and interactions, but the researchers do not recognize this–nor its implications for interpreting effects–because the process is atheoretical and ad hoc. To their minds, they are just making the conditions of the treatment more salient to the participants. In reality, they have followed many paths to many versions of the experiment, each of which operationalizes the theory in a slightly different way or in a slightly different context; they selected and implemented the version that gets the desired effect; and finally they reported the effect as though they would have gotten it from any experiment that broadly operationalizes the theory.

        A quick example: a friend’s dissertation research hypothesized that little kids can infer the intentions behind aggressive interactions they observe. Over a series of pilots, the friend’s lab repeatedly tweaked the procedure, from real people acting out the scenario to using puppets and from video to live performance. They altered the appearance of the puppets, revised the wording in the script and in the measure, narrowed the age range of participants and had the mother leave the room. Admittedly, young children are difficult to work with so some accommodation is necessary, but you can see how it would be difficult to replicate their effects from a design description alone. Many social psych experiments follow the same procedure: select the effect, design a study that achieves it, and act like you did it the other way around.
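
        A minimal simulation sketch of the selection process described in this thread (an illustration only; the number of pilot variants, the per-arm sample sizes, and the effect-size distribution are all made-up assumptions): pilot several variants of a study, keep the one with the largest observed effect, and the selected pilot estimate ends up inflated well beyond that variant's true effect, so even a cleanly pre-registered confirmatory study of the "winning" variant tends to come up short.

        # Pilot several design variants, keep the one that "worked", then run a
        # pre-registered confirmatory study of that variant. All numbers are
        # hypothetical; the point is the winner's-curse selection effect.
        import numpy as np

        rng = np.random.default_rng(0)
        n_sims, n_variants = 2000, 8
        n_pilot, n_confirm = 20, 50  # per-arm sample sizes (assumed)

        chosen_true, pilot_est, confirm_est = [], [], []
        for _ in range(n_sims):
            # Each variant's true standardized effect is small and mostly negligible.
            true_d = rng.normal(0.0, 0.05, n_variants)
            # Observed pilot effect = true effect + sampling noise (SE = sqrt(2/n)).
            pilot_d = true_d + rng.normal(0.0, np.sqrt(2 / n_pilot), n_variants)
            best = np.argmax(pilot_d)  # keep only the variant that "worked"
            chosen_true.append(true_d[best])
            pilot_est.append(pilot_d[best])
            # Confirmatory study of the chosen variant, run as pre-registered.
            confirm_est.append(true_d[best] + rng.normal(0.0, np.sqrt(2 / n_confirm)))

        print("mean true effect of the chosen variant: %.3f" % np.mean(chosen_true))
        print("mean pilot estimate (after selection):  %.3f" % np.mean(pilot_est))
        print("mean confirmatory estimate:             %.3f" % np.mean(confirm_est))

        Under these made-up numbers, the selected pilot estimate is far larger than the chosen variant's true effect, and the confirmatory estimate regresses back toward that small true effect, which is one statistical reason such finely tuned effects fail to generalize outside the originating lab.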

    • Michael:

      Really tricky issues here that some comments JG Gardin made when I was a graduate student (around 1984) might help with.

      JGG1: You cannot rule out a hypothesis by the way it was generated.

      This is most clearly seen in math when “ridiculous” conjectures are made but then a subsequent proof is found that survives critical review by “enough” mathematicians.

      JGG2: But you should decide what to spend your time on and hence disregard “ridiculous” hypotheses.

      The problem, as Andrew points out, is that in academia today there will be more than enough people doing research on almost any “ridiculous” hypothesis.

      Now, in math this may not be much of a problem, but in empirical research the barrier to ruling out “ridiculous” conjectures – “subsequent proof is found that survives critical review by “enough” mathematicians” – is replaced by “subsequent studies by “enough” experimenters to discern that the claimed effect can regularly (in a statistical sense) be seen”. The inefficiency of the latter puts us at great risk of accepting ridiculous hypotheses for some passage of time.

      CS Peirce tried to argue that an economy of research which prioritized the study of only hypotheses that seem “reasonable to the scientifically informed among us” would avoid that risk, but in the end he couldn’t give a defensible rationale.

      Anyways, those comments by Gardin to eschew mindless or unmotivated pattern searches (like what you describe) are what led me to go into statistics, and I have been thinking about it ever since.

      • I *LOVE* the first quote: you should absolutely judge a hypothesis empirically, not on the basis of who posed it or how it got in its final form. It made me realize that I had misstated my objection (see my reply to Andrew’s comment), which is not to how the researchers arrived at their ultimate hypothesis but how they arrived at their ultimate test of that hypothesis. Pre-results review would definitely preclude monkeying with the hypothesis; what it would not preclude is having the hypothesis and the experimental procedure co-evolve so that the proposed study is not testing the broad theory that the researchers claim (and often believe) they are testing, but instead tests a narrow, special case of that theory. So, as to the second quote, the issue is more that we need to prioritize reasonable tests of reasonable hypotheses, and to assess that, reviewers would need to know the basis for the set of decisions that led to both.

  5. I’m not smart enough to predict whether this approach will work.

    So why don’t we get a journal to try it and see if it works? If it works good, then do more of it and try some tweaks to make it gooder.

    Goodness knows we’ve wasted lots of money on badder things than this.

    • Quote from above: “So why don’t we get a journal to try it and see if it works?”

      I dislike this sentiment a lot. It seems prevalent in parts of social science, and particularly in some corners that come up with a lot of “solutions” and “improvements”.

      # You can do a lot of irreversible harm this way (cf. “Primum non nocere” or “First, do no harm”).
      # You can basically do what you want this way IF you just say things like “hey, let’s just see in 5 years if it worked out or not”.
      # Whether something “works” or not might be hard to measure (in the short term), and/or is subject to how it is being measured, and/or can of course be manipulated.
      # Finding out if something “works” is NOT the only thing to go by in science. I reason you can, and should, take other things into consideration. For instance, whether some proposal unnecessarily gives power and influence to the exact people, processes, and entities that probably helped mess things up in the past.

      You can’t just go around and try stuff and see what sticks. That’s not scientifically responsible behavior in my opinion and reasoning.

      • Anon:

        Sure, but the flip side of this argument is that (a) realistically, you’ll never know whether this method “works” in any rigorous scientific sense, and (b) different journals can have different policies, and arguably pluralism is a virtue in itself in this sort of situation.

        • Quote from above: “Sure, but the flip side of this argument is that (a) realistically, you’ll never know whether this method “works” in any rigorous scientific sense, and (b) different journals can have different policies, and arguably pluralism is a virtue in itself in this sort of situation.”

          If i understood your comment correctly, i agree with a). This is part of why i think there is more to science, and proposing how science should work or can be improved, than “we can find out if it ‘worked’ or not via our meta-scientific research in X years”. I tried to make that clear in my comment above.

          I disagree with b). I do not think pluralism (in some cases) is necessarily a virtue in science. A quick reasoning results in me thinking that we should be talking about science, and scientific things, at all times. So, if i would propose something that’s not in line with science, it may result in pluralism but may be “bad” for science.

          Also, i think “we” should have learned from the last decade or so that certain things like doing research with 50 undergraduate students, and p-hacking the sh@t out of the analyses, may at one point in time have been defended by saying stuff like “hey, let’s see if it works out in the end” or “hey, arguably pluralism is a virtue itself”. Something that started out as contributing to “pluralism” can in the end overtake everything else. If that something is “bad” for science, then i reason this is bad…

          Furthermore, according to one of the links in the blog there are already 200 journals that offer “Registered Reports”. So, it may already be “a thing”, and irreversibly implemented, and on its way to overtake everything else. In my reasoning “Registered Reports” are reinforcing several problematic issues of science, and scientific publishing (see comment above somewhere).

          I am also not sure how to view the situation that certain “open science” people promoted, and implemented, a format that seems to often have left out the crucial part of “open” in “open science”: the opportunity for the reader to actually be able to read the pre-registration information. You would think that something that calls itself “Center for Open Science”, or someone that wrote a book largely about “open science”, would have their priorities straight when it comes to “openness”…

        • Quote from above: “I am also not sure how to view the situation that certain “open science” people promoted, and implemented, a format that seems to often have left out the crucial part of “open” in “open science”: the opportunity for the reader to actually be able to read the pre-registration information. You would think that something that calls itself “Center for Open Science”, or someone that wrote a book largely about “open science”, would have their priorities straight when it comes to “openness”…”

          What helped me to understand this (and some other things) is to not call it the “Center for Open Science” (COS) but the “Controlling of Science” (COS).

          I have started to think that a lot of their actions, and/or proposals, and/or promotions, are (ultimately) possibly more about controlling science than they are about open science.

          I think promoting “Registered Reports”, where the introductory papers about it seem to have not clearly stated that the pre-registration should really be made available to the reader via a simple link in the paper, is just one example of this.

          Other examples may include possibly setting up, or allowing, some 3rd party to “control” if the “open practices badges” are handed out “correctly”. And yet another example could be all these large “collaborative” efforts that, at least to me, only seem to benefit a small group of people, and give a small group of people way too much power and influence. That’s a form of, and a lot of, “control” as well i reason.

          Anyway, i wouldn’t be surprised if they would come up with all kinds of things to further “control” matters. I am curious for instance what they could do with pre-print servers, where everyone can publish papers at this point in time. I wouldn’t be surprised if somehow that will become increasingly more difficult, because of some form of “control” concerning that.

      • I agree that some things are really bad and we shouldn’t try them even once.

        This seems pretty harmless, though. There are lots of terrible journals. I don’t see why one more would be so bad.

  6. Some care is needed in framing these (valid) criticisms. People trying to do the right thing often discover that the experts keep bringing up new things that must be done.

    If you increase power, people say you didn’t consider measurement error. You then do that, and the experts say you didn’t consider theory. You then develop a theory, and then the expert says you didn’t implement it. You then implement it computationally, and the expert says you have too many free parameters. And so on. I have this tendency myself (not to say i am an expert). This is more a note to myself to avoid too much criticism without a sufficient amount of praise.

  7. Despite the many comments already, I have a few thoughts as editor of Judgment and Decision Making.

    First, I totally agree with Andrew’s reference to the need for “theory”, although I prefer to think of it as the need for plausible explanations (roughly, causal mechanisms, given a limited set of standard causes specific to the discipline). I also agree that what counts as plausible in one field may not in another. (Evolutionary explanations often seem like magic to me, and, yes, they can explain almost anything.)

    But I don’t think we can get away from judgments of plausibility by begging humility about our ability to make these judgments. This sort of humility seems to be part of the support for pre-registration: “If they pre-register and it works, who am I to say that it doesn’t make any sense?”

    We have begun to deal with registered reports. That is different. I think they are most useful when someone wants to do a risky study, e.g., one that requires considerable resources and might not work but could have a big payoff. Replications of doubtful results could count, but not just this.

    I was initially worried that it would be hard to find competent reviewers. (That is ALWAYS hard. The few who are both competent and willing, in any given area, are overwhelmed, in part because of the common practice of submitting papers to one journal after another, accumulating new reviews each time.) I thought that part of the fun of reviewing is finding out what actually happened. But it seems that many people think that kibbitzing on someone else’s study is fun too, like doing all the clever work of experimental design without having to do the drudgery of collecting the data.

    But one worry I have is that the line between reviewers and co-authors is often very fuzzy. Many co-authors do pretty much what the reviewers do. Nobody else seems to worry about this yet.

    I also think that some of the comments before mine point to other concerns about what happens if registered reports become common practice. So far I have tried to prevent that by limiting them to studies that seem risky in the sense described.

    Another minor worry is how a pre-registration proposal gets rejected. In principle, we could go through draft after draft, like a long negotiation. The solution is to stop it at some point. But note that a rejection of a proposal is not a commitment to reject the paper if it is ultimately done anyway. It is possible to say, “We need to stop this now. We have not agreed. But if you take all our comments to heart and submit the paper, it is likely that it would be accepted.”

    In sum, I think that, if properly limited, registered reports are a good thing, and the problems can be solved.

    • “But one worry I have is that the line between reviewers and co-authors is often very fuzzy. Many co-authors do pretty much what the reviewers do. Nobody else seems to worry about this yet.”

      I have (e.g. see 1st comment above). It’s just one of the things about “Registered Reports” that doesn’t make sense to me. You give reviewers the role of being a co-author without them being rewarded as one, and without them being known to the reader of the paper.

      What you can also get here is that there will be certain “experts” who are being asked to review (i.e., being able to steer the actual design of research in a certain way) over and over again. You could end up with a few people (anonymously) dictating large parts of what, and how, things are being researched.

      Also think about all the “pressure” authors had to p-hack and do all kinds of stuff just to “please the reviewer”. I wonder how that will work out when reviewers can have a say in the design of the study.

      The whole thing also shows the absurdity of the journal-editor-reviewer model of publication to me. Somehow something is now “good” because 2 peer-reviewers (who basically function as co-authors) tell me how to perform my study, but if i submit my paper with 2 co-authors (who basically functioned as peer-reviewers) it’s not “good” enough because it hasn’t gone through “independent peer-review” !?!

  8. Theory and measurements seem to be the main problem for social sciences. It seems to me that in many social science disciplines, existing theories do not provide much prior on plausibility of a new finding (more precisely, not much beyond what a little common sense would provide.)

  9. Quote from the blogpost; “Methodology is important, but methodology is not just causal identification and sample size and preregistration”

    Yes!

    I think (the gist of) this also fits nicely with your blogpost and/or paper that mentioned that openness and transparency are not enough. I think it’s highly problematic when only certain aspects of science and/or the scientific process get talked about or highlighted. I also think this might be related to, or can lead to, unfairly or prematurely equating certain things with “good science”.

    I fear this may have already happened a lot with “open science”, where it is somehow viewed as “the ultimate solution” to all science’s problems. I reason “open science” does not necessarily imply it is “good science” (and apparently it doesn’t even always imply it’s actually open and transparent, e.g. see the paper about an evaluation of “Registered Reports” mentioned in this comment section).

    The same possibly problematic process of equating certain things with other things arises with talk about things like “the incentives”. It’s a vague term to me, and i fear it is often not really thought about. I fear that because, somehow, a lot of people want to do something about “the incentives”, this in turn is almost automatically seen as something “good” for science. I think that this process can easily lead to coming up with all kinds of stuff to “incentivize” that may not even be “good” for science.

    So my conclusion of the above is that both “open science” and “the incentives” are terms that are being thrown around a lot in recent discussions, but both are not directly about science, the scientific process, or how to optimally perform science. I think this is problematic, because i reason if you’re not careful you are mostly talking and thinking about “open science” and “the incentives” when you should be talking and thinking about science, and the scientific process, and how to optimally perform science.

    I think it might be crucial in scientific discourse, and reasoning, to:

    1) Be as precise as possible and necessary when talking and thinking, about science, and the scientific process, and how to optimally perform science.

    2) Have a “big picture” view that encompasses all the many things that could be important in science, and the scientific process, and how to optimally perform science.

    3) Use/switch between/combine 1) and 2) when talking and thinking about science, and the scientific process, and how to optimally perform science.

  10. Anon said,
    “So my conclusion of the above is that both “open science” and “the incentives” are terms that are being thrown around a lot in recent discussions, but both are not directly about science, the scientific process, or how to optimally perform science. I think this is problematic, because i reason if you’re not careful you are mostly talking and thinking about “open science” and “the incentives” when you should be talking and thinking about science, and the scientific process, and how to optimally perform science.

    I think it might be crucial in scientific discourse, and reasoning, to:

    1) Be as precise as possible and necessary when talking and thinking, about science, and the scientific process, and how to optimally perform science.

    2) Have a “big picture” view that encompasses all the many things that could be important in science, and the scientific process, and how to optimally perform science.

    3) Use/switch between/combine 1) and 2) when talking and thinking about science, and the scientific process, and how to optimally perform science.”

    Good points. But, as is so often the case, how to implement them without reducing them to meaningless formulas is a non-trivial problem.

  11. Quote from the blogpost: “In a nutshell, pre-results review has helped authors improve the methodological quality of their work (…)”

    I am not sure, but it looks to me like this statement is based on an n = 12 self-report-style evaluation.

    If this is correct, the quote above seems highly sub-optimally phrased.

    Perhaps the 12 people merely *think* the methodological quality of their work has been lifted, while nothing may actually have been improved; nor does it necessarily imply the final research proposal, execution, analyses, and paper are actually “good”.

  12. Quote from the blogpost: “Over the past year, BITSS has been working with the Journal of Development Economics to pilot this form of peer review, and we recently shared some of the lessons we learned through a post on the World Bank’s Development Impact blog.”

    If i am not mistaken, it looks like 5 proposals were accepted at the “Stage 1” phase, and 25 rejected.

    I find this interesting in relation to what is mentioned in the comments somewhere here that “Registered Reports” may actually possibly reinforce many of the problematic issues in science/academia by giving way too much power and emphasis on the journals/editor/reviewer system that may have plagued science for decades now.

    I am very interested in why these 5 proposals were accepted and why the 25 were not.

    If i understood things correctly, i can read 3 reasons on the blog that are connected to rejection, of which 1 seems highly problematic and in line with my worries expressed above: making a strong case for the importance of the research question in terms of its potential contribution to the literature. I feel this is too subjective, too generally stated, and can easily be used to only publish certain work, by certain people.

    The other 2 (power calculations, and a detailed enough analysis plan) seem to me to be easily noted and fixed at the “Stage 1” phase, and definitely NOT reasons for rejection (especially in light of the supposed “benefits” of all this pre-results review; see the sketch of a basic power calculation after this comment).

    (Side note: I’ve read that (some) journals keep track of when a “Registered Report” has been withdrawn from submission by the authors. Perhaps they could keep track of submissions rejected by the journal as well, including the authors’ names, title, and abstract, and perhaps the proposal as well.)
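
    As a side illustration of the point above that a missing power calculation seems easy to fix at the Stage 1 phase, here is a hedged, generic sketch of the normal-approximation sample-size formula for a two-arm comparison; the effect sizes, alpha, and power target are arbitrary assumptions, not anything taken from the JDE pilot.

    # n per arm ~= 2 * (z_{1-alpha/2} + z_{power})^2 / d^2, with d the standardized
    # (Cohen's d) effect size. Normal approximation; all inputs are illustrative.
    from scipy.stats import norm

    def n_per_arm(d, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)  # two-sided test
        z_b = norm.ppf(power)
        return 2 * (z_a + z_b) ** 2 / d ** 2

    for d in (0.2, 0.5, 0.8):
        print(f"d = {d}: roughly {n_per_arm(d):.0f} participants per arm")

    A few lines like these (or their equivalent in any power-analysis tool) are the kind of thing that could presumably be requested and added during Stage 1 review rather than serving as grounds for rejection.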
