Great news: “From next year, eLife is eliminating accept/reject decisions after peer review, instead focusing on public reviews and assessments of preprints.”

Valentin Amrhein points to this new policy from a biology journal. Here it is:

From next year, we will no longer make accept/reject decisions at the end of the peer-review process; rather, all papers that have been peer-reviewed will be published on the eLife website as Reviewed Preprints, accompanied by an eLife assessment and public reviews. The authors will also be able to include a response to the assessment and reviews.

The decision on what to do next will then entirely be in the hands of the author; whether that’s to revise and resubmit, or to declare it as the final Version of Record.

Here’s the journal’s five-step process:

1. Submission

Submit your paper and you’ll hear if it is invited for peer review.

2. Peer review

Your paper undergoes consultative review by experts in the field and a publication fee is collected. You will then receive an eLife assessment, public reviews and confidential recommendations from reviewers on how to improve the paper.

3. Publication

Your paper is published on eLife’s website as a Reviewed Preprint along with the eLife assessment and public reviews. It is then citable.

The eLife assessment reflects the significance of the findings and the strength of the evidence reported in the preprint. You will also be able to include a response to the assessment and reviews.

4. Author revision

You control which revisions to make, and if and when to resubmit. If you revise, we will publish a new Reviewed Preprint with updated reviews and assessment.

5. Version of Record

At any point following peer review, you can choose to have your Reviewed Preprint published as the ‘Version of Record’.

This sounds just great.

45 thoughts on “Great news: “From next year, eLife is eliminating accept/reject decisions after peer review, instead focusing on public reviews and assessments of preprints.””

  1. This generated a lot of discussion when it was announced (Oct. 2022), and I wonder how its first month of implementation has gone. I like the experimentation with publishing, but there are aspects that I and others I’ve talked to find puzzling.

    (1) Desk rejection — i.e. immediate editorial rejection without review because your paper doesn’t seem cool enough — still exists. I’ve published one paper in eLife and had two desk-rejected in the past ~5 years.
    (2) It still costs $2000 to get your paper “published” in eLife. This is down from $3000, by the way.
    (3) I think there’s less motivation to *review* a paper for eLife now, since the review doesn’t really matter.
    (4) Combining all this, is it really that much of an advantage over just having the preprint posted? I suppose this way one knows that *someone* has read the paper!

    Nonetheless, good for eLife for trying something different.

      • Another thing to consider for reviewers: it seems an infinite number of revisions are allowed before the author decides on the final version. So if you agree to review a paper, does that mean you might be on the hook for 10 revisions?

      • eLife is providing a service that is not free. They curate scientific papers. To do this, editors (and subsequently reviewers) have to read the submitted papers in more or less detail and then make decisions (interesting vs. not interesting). This curation is useful! For papers selected as “interesting”, one can then read reviewer reports that are an independent assessment of quality (the reviewers could subsequently point out flaws that make the paper “not interesting”). I don’t understand how this process could be done well for free.

        • A couple points:
          1. How do we know the editors are doing a good job selecting interesting articles? I submitted a paper to eLife which I consider to be a seminal paper in computational biology and it was desk rejected.

          2. How do we know the reviewer reports are finding flaws? I’ve been constantly finding serious errors in high profile journals for work in my field.

          To me this service of curation is similar to IMDb or Rotten Tomatoes, neither of which anyone pays for. It’s actually impossible for me to think of anyone I would pay a couple thousand dollars just to get their opinion on one of my papers. Heck, I wouldn’t even pay 10 bucks.

        • How do we know the editors are doing a good job selecting interesting articles?

          I guess we don’t – and there’s a clue in the word “interesting”, i.e. it’s subjective. No problem with that – it’s their journal. I agree, though, that it’s maddening as an author to have your manuscript (MS) rejected at the submission phase.

          As I understand it eLife rejects ~ 70% of submissions in their initial triage. About half of the 30% sent for review are rejected in the review process. So around 15-ish% of submitted papers are published.

          In the new system it seems like they will still lose around 70% of submissions in the initial triage phase. Everything else is published even if it’s substandard by their own criteria (e.g. a paper can have “Incomplete” or “Inadequate” support for the interpretations made but will still be published).

          Thinking about publication charges: up to this year, the 15% of submissions received that get published were each charged £3000. Now the 30% of submissions received that get published are each charged £2000. That seems like a decent increase in revenue, combined with (it remains to be seen, though) a drop in “quality”/“selectivity”. It will be interesting to see how this plays out, but it doesn’t address the main issues with contemporary scientific publishing.
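          Spelled out, the commenter’s revenue arithmetic looks like this (a back-of-the-envelope sketch using the rough acceptance rates and fees quoted above, not official eLife figures):

```python
# Expected fee income per *submitted* manuscript under the old and new models,
# assuming the approximate acceptance rates and charges quoted in this thread.

def revenue_per_submission(publish_rate, fee_gbp):
    """Expected publication-fee income per submitted manuscript."""
    return publish_rate * fee_gbp

old = revenue_per_submission(0.15, 3000)  # ~15% published, £3000 each
new = revenue_per_submission(0.30, 2000)  # ~30% published, £2000 each

print(f"old model: £{old:.0f} per submission")  # £450
print(f"new model: £{new:.0f} per submission")  # £600
# Roughly a one-third increase in expected revenue per submission.
```

          So even with the lower fee, publishing twice the share of submissions would plausibly raise expected revenue per submission by about a third.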

        • Quote from above: “The point is are the editors qualified to make such important decisions without reviewers’ input.”

          Peer-review is seriously one of the things I find most shocking concerning current day science. It doesn’t make much scientific sense to me. To name a few reasons:

          1) I think it’s reasonable to assume most papers published after the current editor-peer review process are by scientists with a diploma of some sort. If peer-review is somehow seen as being necessary for quality control or something like that, does this mean that you view these scientists with a diploma of some sort as possibly incompetent? And if so, should this problem be solved differently (e.g. better education)?

          2) I think it’s reasonable to state that we have some sort of customs concerning authorship and intellectual credit or whatever the appropriate word is here. That makes scientific sense to me, also because it makes it easier to look up other papers by author X if I view a certain paper by author X as being good, etc. Isn’t peer-review messing this up in a certain way and to a certain extent? Who knows what part of certain papers were co-written by anonymous peer-reviewers. Who knows what incompetent scientists are helped out over and over again, and subsequently appear more competent than they actually are due to crucial reviewer comments. Etc.

          3) If it is somehow seen as good that more people look at a paper because it is reasoned that this enhances the chance that mistakes are spotted or stuff like that, why is it not good enough that my co-authors read the paper? Or why is it not good enough that I let a few of my colleagues read it?

          The entire peer-review thing makes no scientific sense to me. It only makes sense to me when viewing editors and peer-reviewers as a way to control things and only let certain stuff be published and/or highlighted.

          It is simply nonsense to me.
          Nonsense I tell you.

        • Quote from above: “It doesn’t make much scientific sense to me.”

          I am pretty sure there is or will be some sort of Communistic takeover attempt of Social Science where once you get your badge after pledging allegiance to the community and collaboration you are then part of some big group who will write and investigate things as a gang of collaborators.

          Talk about a way to control things and make sure certain voices are not heard and certain things are not seen by the way! If you can get away with making this the standard, individuals are pretty much drowned and driven out by all the groups and gangs of collaborators.

          Another result of this might be that nobody really knows who wrote what, or who came up with what in these big collaborations as well. But that doesn’t matter, science is a collaboration after all. Who cares about intellectual credit or merit or stuff like that. It’s not that these things might be important to make sure the best scientists are able to get and do work. It’s all about the collective, the community, and collaboration!

          And, lots of people will have read the paper. And perhaps even more will have peer-reviewed it as a preprint because as a group you shout so loud that certain kinds of people might actually listen to you. So given all of these eyes that looked over your paper, you just know it’s good after that! And can you imagine what this would do to people critical of your work? Their voices will be drowned out by the masses of collaborators, who will no doubt have a great scientific attitude and complementary abilities.

          It’s totally not unscientific at all to do things like this.
          It’s for the best, because it’s for the community!

        • Anon:

          You write, “I am pretty sure there is or will be some sort of Communistic takeover attempt of Social Science where once you get your badge after pledging allegiance to the community and collaboration you are then part of some big group who will write and investigate things as a gang of collaborators.”

          As a social scientist, I can assure you that this has not happened, nor is there any plausible pathway to it happening in the future. Please let’s keep the blog comments on a more sensible level.

        • Communism: A theoretical economic system characterized by the collective ownership of property and by the organization of labor for the common advantage of all members.

          I think there are many recent developments in parts of social science that might have overlap with this general idea of collectivism and organization of labor.

          If I am not mistaken, there are voices that talked about having people be assigned as replicators (scientists who just focus on replicating studies). There are large scale collaborative efforts where tasks are divided, and certain people do certain things. There has been talk about how social science needs to somehow become more collaborative (even though I reason that it has been collaborative for decades and that certain forms of collaboration might be bad for science).

          In short: lots of things recently in my view that focus on the collective.

          I think that’s possibly scientifically harmful, but if you don’t see any dangers concerning this that’s good to hear I guess.

        • Anonymous (ao), regarding the 3 listed reasons they don’t like peer review:

          1) First of all, a diploma in and of itself is hardly enough to judge someone’s work. What you’re arguing here is sort of a reverse ad-hominem; assuming their research is perfectly fine just because of a title the authors hold. Even if you are sure they are competent, it’s still good to review their work to determine if there are any areas of improvement, or if the article is a good fit for the proposed journal. Even if the article is really good, that doesn’t mean it lives up to the standards of the journal; to get published in the Annals of Statistics (for example) it’s not enough to be good, it has to really be the cream of the crop.

          2) Do you really believe that reviewers are “co-writing” papers? Do you really believe that reviewer comments are enough to turn a bad article into a good one? The most they do is offer insight that might or might not be helpful. That hardly takes the bulk of the responsibility away from the authors. I’m sure there have been situations in history where a lone anonymous reviewer went above and beyond to salvage a bad paper, but I’m willing to bet that doesn’t happen nearly often enough to be an issue of authorship credibility.
          Also, doesn’t this point contradict your first to a certain extent? If you’re willing to believe that anyone with a diploma is competent, how can you then turn around and argue that “Who knows what incompetent scientists are helped out over and over again, and subsequently appear more competent than they actually are due to crucial reviewer comments…”?

          3) Can you *really* not see the issue with just letting authors themselves be the sole authority on whether or not their work is good enough for publishing? Even the biggest fool in the world still has the capacity to believe their work is amazing. I don’t see how you’re arguing that the logic of “just trust me bro” is more scientific than reviewing.

          I definitely have issues with the peer-review process, just like everyone else, but you seem to be arguing that reviewing is in-and-of-itself “unscientific”. That’s a huge step too far for me.

        • Quote from above: “I think there are many recent developments in parts of social science that might have overlap with this general idea of collectivism and organization of labor.”

          Ow man, this is actually pretty fun to do. Let’s look and see if I can find more recent examples that might focus on the collective and associated things and issues, and fit with the view that this stuff might be promoted.

          If I am not mistaken, there has been a paper published talking about crowdsourcing and how this might be good for certain stuff. Side note 1: if you write a paper about the benefits of working with many people but write it with only a handful of authors, do you in doing so present evidence for or against your thesis? Side note 2: does crowdsourcing possibly lead to outsourcing and crowding out (e.g. see Binswanger, 2014)?

          If I am not mistaken, there has been a paper published talking about redefining statistical significance. Is it likely that this new proposed significance level might in practice be easier attained via large collaborative projects using many participants?

          It’s stuff like this that makes me think what I wrote down. But perhaps we could let the wisdom of the crowd speak on all of this to see if it makes any sense. That’s science after all!! I think I read somewhere that guessing how many marbles there are in a jar is best done with large crowds, or something like that. I guess that data and evidence can totally be applied to science and argumentation and evidence for example. So, perhaps we should all have a crowdsourced vote about this stuff. Or maybe just have a few people who are group leaders of some sort comment to attempt to sway public opinion.

          I hope that was sensible enough.

          Sometimes I wonder what is even scientific anymore…

        • Quote from above: “I think there are many recent developments in parts of social science that might have overlap with this general idea of collectivism and organization of labor.”

          One more then, for good measure.

          Wasn’t there a paper a few years back about prediction markets where people had to decide which findings would replicate or something like that based on reading the abstract? And, was it somehow proposed that this could be used to somehow decide which papers to actually replicate in the future, instead of for instance having the intrinsic interest of a scientist deciding what to replicate? Did this prediction market stuff involve many people?

          Anyway, you get the idea by now, hopefully. Perhaps many more similar projects can be added to this list in the future. I won’t be around anymore for that. I have had more than enough. In fact I am sick of it all.

  2. In all seriousness I could see this transition being very successful for eLife. In biology we often have to officially publish our papers even when we don’t really want to (we have to show our funder demonstrative progress on a grant, or a grad student needs a publication to graduate, etc.). Currently when people are desperate to get a paper published they’ll send it to a mega journal like Plos One or Scientific Reports. As long as these reviewed preprints count as a publication in the eyes of a funder/institution, then it seems like a better route, because as long as it doesn’t get desk rejected you can decide to have it published no matter what the reviewers say. No additional experiments, etc. The only potential downside is that if you get negative reviews those will be public, but who’s actually going to read those? And your response to the reviews will also be posted, so you could always just claim their concerns aren’t valid and have the last word.

  3. I’m not getting how this would work with the whole tenure review side of academia. When I went on the job market, departments were single-mindedly focused on your publication record. It was to the point where many departments had an actual number which denoted the value for a top publication in each of the top fifty journals. In order to make tenure, you had to get a certain number of points. If everyone can get published anywhere, then what are they going to use for their metric instead? Your tap dancing ability?

    • >then what are they going to use for their metric instead? Your tap dancing ability?

      Hopefully the quality of your work, where citations and publications in so-called top journals are not used as a proxy for quality. But I’m not in academia, so I wouldn’t know.

    • If I understand your point correctly, you caution that a widespread application of eLife’s review pattern would let anyone publish in any journal, which in turn would lead to some kind of regression towards the mean. That in turn would destroy the reputation hierarchy of journals.
      I would agree with that chain of reasoning given your assumptions, but I would choose different assumptions to start with.
      Your argument appears to omit the preview by an editor. I consider this assumption relevant because as long as there is an editor previewing the paper, there is a gate-keeper. This ‘gate-keeping process’ would obviously need to be adjusted in order to ensure that the quality of the publications is high enough.
      If all journals in one field of research were to adopt this mode of review, I guess the different journals would create different entry barriers in that preview process, thus solidifying the ‘hierarchy’ among the journals. I therefore don’t believe universities would have to discard all of their metrics in determining who receives tenure.
      I’m curious whether this kind of peer review is going to become more popular. In my native field of economics I have read a few things about a paradigm shift towards rethinking how peer review can be improved. In that sense I appreciate changes such as eLife’s, because they promote some debate on how to improve scientific discourse!

    • Probably something much sillier, like the cumulative quality of work you’ve put out over the course of your career to that point. If only we had a convenient points system!

    • Botekin,

      Worst case, the T&P committee might end up having to read some of your published work to decide whether you warrant tenure. What a drag that would be.

  4. You are not understanding the deal, Andrew. They still have acceptance; it’s just different tiers of acceptance.
    They have the tiers:
    – Landmark
    – Fundamental
    – Important
    – Valuable
    – Useful

    That’s the same thing that Nature has, with “Nature”, “Nature Genetics”, “Nature Communications”.

    But instead of having different journals, they have many journals inside one journal.

    Now people will say:
    “Wow he has three e-life landmarks!”
    “Ah, he only publishes e-life usefuls.”

    So nothing changes. If anything, the hierarchy and gatekeeping becomes worse.

      • The paper can have no label, which in essence is a rejection with an implicit “useless.”

        So in sum that’s all that there is to it. The same journal has different tiers, and “rejection” is still there, it’s just a quiet rejection, where you still post the pre-print.

  5. The main (and quite small IMO) advantage of this change seems to be that it reduces the time for a “validated” (i.e. peer-reviewed) manuscript to appear online. Otherwise the approach seems a little arrogant on the part of the editors (and it doesn’t really address any of the major issues around scientific publishing).

    Normally a manuscript (MS) is submitted to a journal; there is a certain level of triage (strong to non-existent depending on the “quality” of the journal); if the MS passes triage it gets sent to reviewers; an editor makes a decision to pursue or not (i.e. reject) based on the reviewers’ feedback, and an MS that passes this test is likely to go through one (maybe two; rarely three) rounds of revision.

    It’s then published and the paper makes its way in the world according to the interest (and citations) it receives. The true value of a paper accrues in the weeks, months, and years after publication.

    In the new eLife process the editors/reviewers seem to wish to “define” the value of the paper from the outset through their “eLife assessment”, in which they can decide that the paper is “Landmark” or merely “Useful” and so on, and the support for the interpretations “Exceptional” or “Compelling” or even “Incomplete” or “Inadequate”. What’s the point of this conceit on the part of the journal? If the paper is publishable, then just publish it and let others decide whether it turns out to be a “Landmark” or whatever. And why publish papers where the supporting evidence is “Incomplete” or “Inadequate”?

    Can’t see this making any difference to scientific publishing. In this particular case I imagine that interested readers will ignore the “eLife” assessment” and treat the paper as they normally would (“is it interesting/useful to me?”). A downside might be that researchers that like eLife as a place to send their good quality papers because they like the review process and the journal’s selectivity, might decide that they’re not so interested in a journal that has chosen to lower its standards by associating itself with papers that might otherwise be rejected as having “Incomplete” or “Inadequate” support for its interpretations. I guess we’ll see how this pans out.

  6. Michael Eisen (EIC at eLife) just made his Twitter private so I can’t see what he tweeted, but it looks like this might have been a short-lived experiment. Maybe some funders don’t consider these articles to be published which limits who can use this model.

    • His Twitter is public again; he claims the model is thriving but that he’s been personally attacked for it:

      All his follow-up tweets seem to be in direct conflict with his first tweet, “I’m sorry. I tried.”

      Unless he’s implying that he’s being pushed out despite the claimed success of eLife’s new direction, not sure.

      • Jordan:

        I followed the links, and . . . damn! I have no idea what’s going on there at all! I can imagine that changing a peer-review process could be controversial, but I wouldn’t have thought it would involve personal attacks.

        Biology is just such a big-budget field compared to statistics or political science. The stakes are higher, everyone has these big labs, publication in journals is such a big deal . . . a much different world than what I live in.

        • I also don’t know what’s happening obviously, but one thing I could imagine is maybe previous authors of articles at eLife are upset. eLife had advertised itself as a selective, prestigious journal, so a lot of people published there because of that. So if eLife now develops a reputation for publishing anything then it could devalue those previous publications (when looking at a resume will people distinguish in their minds eLife articles published before and after a certain date?).

          It’s hard to imagine why people who haven’t published at eLife would be upset because eLife is just one of many journals, if people don’t like their model there are tons of other journals for them to publish in.

          P.S. I had referred to this in a previous comment, but my article which was desk rejected at eLife has been invited to be reviewed at a journal with a much higher impact factor, so I am personally opposed to desk editors thinking they know which articles are important enough for their journal, which will continue in eLife’s new model. To be honest the most important factor seems to be knowing the editor of the journal.

  7. There is already enough gatekeeping at the grant level (which is far too centralized, but that is a separate issue). For a paper you only really need two labels:

    1) Do the authors describe their methods in enough detail so someone else can attempt a replication?
    2) Do the authors make any interesting predictions, i.e. ones that would be surprising if the alternative explanations people can come up with were true?

    The current system has resulted in failure to replicate ~80% of claims that an effect is statistically significant in a positive/negative direction. And that is a very weak criterion to begin with; with a sufficient sample size you would get a ~50% replication rate, because there is always some kind of difference between two groups.

  8. The impact factor for PLOS One is ~4. It would be closer to 1 except for a few papers that are relatively well cited, which were perhaps submitted for fast publication due to competition concerns. This plan in effect turns eLife into a version of PLOS One that will, however, be more work for editors, readers, and reviewers. I would expect these plans to cut the impact factor of eLife to less than half of its current value fairly quickly. In essence: lots more work for everyone, including readers, and no quality control. A sad end for the aspirations to scientific quality as a driver for eLife.

    • What extra work for the reader?

      We shouldn’t be uncritically reading the papers just because they were peer reviewed. The only thing that has ever been shown to do is enforce the status quo by stifling observations or ideas that are inconvenient to whatever standard narrative. In fact, peer review probably makes it more difficult to interpret a paper because you know it has gone through that filter. Ie, findings with low prior probability are actually more reliable than those with high because they’ve undergone more stringent selection.

      Then there is the problem shown by the cancer replication project that no one can replicate ~50% of what gets published even *in principle*.

  9. To provide yet another example (see elsewhere on this page for other examples) of how large-scale collaborative (and controlled?) efforts might be facilitated and/or promoted, I just want to share the following.

    One way to do this as well, is to focus on reporting requirements regarding samples, and to encourage “diverse” samples. If I am not mistaken the APA and/or certain journals are mentioning this more and more. This could lead to reporting requirements regarding the samples, and how “inclusive” and “diverse” they are or aren’t.

    Step by step (e.g. see the tactics of the TOP-guidelines, or as I like to call them the FLOP-guidelines) this could lead to first reporting this stuff, and then providing reviewers with reasons to not recommend publishing because the sample is not “diverse” enough, and then simply requiring “diverse” samples or otherwise the paper is not being published.

    Of course this might all not happen, but it’s just one more avenue to facilitate and promote and possibly nudge, steer, and direct matters. And that’s what I wanted to note here. These kinds of recent efforts might all point to the same thing if you see the dots and connect them, but that could also all just be a coincidence. I am sure the APA and COS and all these kinds of organizations work together for the good of all and everything!!

    Side note: I am particularly looking forward to possibly hearing how the people behind these efforts, and the ones supporting them, think about how these efforts might directly or indirectly hurt actual “diversity” (e.g. see the paper titled “No more psychiatric labels: Why formal psychiatric diagnostic labels should be abolished” and the section titled “colonialism” of S. Timimi, 2014, p. 212).

    • Quote from above: “I am sure the APA and COS and all these kinds of organizations work together for the good of all and everything!!”

      Did the ancient Greeks and philosophers and scientists of 1600’s, 1700’s, 1800’s, etc. also have these bureaucratic parties attempting to influence matters?

      It all seems so restrictive and un-scientific to me. Not to mention the possibility that certain harmful people might especially be attracted to being part of, and leading, these kinds of organizations. Psychopaths, for instance, might seek power and status and could be attracted to these kinds of positions in these kinds of organizations.

      It all just seems so weird to me. I mean, who are you to tell me about your “guidelines” and “recommendations” and your “nudging” and “aligning incentives” stuff which might very well be influenced by your incompetence and/or your corruption (as possibly evidenced by your past behavior and/or recommendations).

      • Did the ancient Greeks and philosophers and scientists of 1600’s, 1700’s, 1800’s, etc. also have these bureaucratic parties attempting to influence matters?

        Well yes – especially the Catholic church in the 1600’s and 1700’s! But I know what you mean. I think there are four reasons why science has gone from a largely amateur and relatively “unmanaged” pursuit, especially in the 1800–1900’s, that was associated with a huge leap in formal knowledge about the world (this amateur-style nature of science dominated perhaps until the 1970’s and actually still exists in part)… to today’s… well, I don’t know how to encapsulate it in a single word or phrase – it certainly seems “messy”.

        Here are my reasons as concisely as I can manage:

        1. Until early-mid 20th century science was mostly done by individuals or small groups interested in finding stuff out, largely outside the public gaze. E.g. Insulin was discovered and distributed in the 1920’s within this “amateur” ethos (including the contribution of Eli Lilly who developed large scale production and distribution of insulin in collaboration with the Toronto discoverers). A huge amount of low-hanging fruit was “harvested”. By the time of the polio vaccine, science, especially biomed science, was becoming more organized (March of Dimes; setting up of WHO in 1950’s) with inevitable bureaucratic elements to organizing biomed science efforts towards specific aims.

        2. Science up til maybe the mid 50’s was revered as a public good. With widespread understanding of the role of science in the atomic bomb, the developmental disorders associated with thalidomide and then in early 1960’s, Rachel Carson’s Silent Spring, and later the problems with antibiotic resistance, the lustre of science was increasingly tarnished (not necessarily a bad thing IMO). In any case focus grew on real and potential dangers of scientific endeavours. This has also resulted in bureaucratic oversights associated with regulation, for example of molecular genetic techniques; animal use; drug development; and around general ethical issues etc.

        3. Related to 1.), the old amateur and very successful approach of science to “finding things out” according to the interests of scientists (we can still do this to some extent, especially in a Uni environment) has been significantly superseded by a focus on issues of societal concern, especially health issues (smoking/other carcinogens and cancer; driver safety; dietary effects; antibiotics and overuse; vaccines and other therapeutics) but also and especially global warming. This has led to organized attempts to interfere with science and especially to misrepresent its findings on issues (like those just mentioned) that are susceptible to politicization. So we have broad attacks on science that bring together elements of the corporate sector, mostly “right wing” (it has to be said) politicians, but also purveyors of conspiracy theories and of course anti-evolutionists. Some of these have interests in presenting science, or elements of it, as a dubious exercise and in promoting agenda-led oversights. There hasn’t been a robust response to this on the part of science as far as I’m aware (the Dover trial was a fantastic response to anti-evolutionists, but I don’t think one could say it was part of a concerted response to anti-science or agenda-led misrepresentation).

        4. The internet!!! Nuff said. The thought of trying to summarize its contribution to the sorry state of modern life, let alone science, makes me queasy… On the other hand, science as an evidence-based pursuit of knowledge is pretty indestructible so long as we don’t cede power completely to the barbarians…

        • The anti-evolutionists didn’t believe in evolution.

          Today we have something new: people who believe in evolution but think it sucks at its job and they can do better. Even worse, the vast majority don’t even know what a p-value is yet use p-values for everything, let alone have the skills to do even simple mathematical or computational modelling. I know – I was trained to be like this and only escaped due to some unique circumstances.

          It is institutionalized Dunning-Kruger at super-national scale, and I think it is a far bigger threat to progress than people who take the bible literally.

        • Thanks for sharing your thoughts!

          I agree with Anoneuoid who states that:

          “It is institutionalized Dunning-Kruger at super-national scale, and I think it is a far bigger threat to progress than people who take the bible literally.”

          I think there should be much more attention for wisdom in social science.
          I think there should be much more focus on rational thought, reasoning, logic in social science.
          And, I think there should be much more attention for the principle “Primum non nocere” (first, do no harm) in social science.

    • My favorite part about hearing journals and editors and even certain scientists talk about “inclusion” and “diversity” is pondering whether they have ever thought about their own functions and actions in relation to the “inclusion” and “diversity” of thoughts, ideas, reasonings, evidence, research, etc. available to, and presented by, those willing to participate in the search for knowledge and truth.

      Could it be that journals, editors, and certain scientists have been incredibly “non-inclusive” and “non-diverse” when it comes to this all in the past five decades or so? And if so, are journals, and editors, and certain scientists simply incredibly incompetent and/or short-sighted and/or elitist folks that say one thing but do, and have been doing, another thing?
