How to think about papers published in low-grade journals?

We’ve had lots of lively discussions of fatally-flawed papers that have been published in top, top journals such as the American Economic Review or the Journal of Personality and Social Psychology or the American Sociological Review or the tabloids. And we also know about mistakes that make their way into mid-ranking outlets such as the Journal of Theoretical Biology.

But what about results that appear in the lower tier of legitimate journals? I was thinking about this after reading a post by Dan Kahan slamming a paper that recently appeared in PLOS-One. I won’t discuss the paper itself here because that’s not my point. Rather, I had some thoughts regarding Kahan’s annoyance that a paper with fatal errors was published at all. I commented as follows:

Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on arXiv. PLOS-One publishes some good things (so does arXiv) but it’s the place people place papers that can’t be placed. We can deduce that the paper was rejected by Science, Nature, various other biology journals, and maybe some political science journals as well.

I’m not saying you shouldn’t criticize the paper in question, but you can’t really demand better from a paper published in a bottom-feeder journal.

Again, just because something’s in a crap journal, doesn’t mean it’s crap; I’ve published lots of papers in unselective, low-prestige outlets. But it’s certainly no surprise if a paper published in a low-grade journal happens to be crap. They publish the things nobody else will touch.

It does make you wonder, though, what can be believed at all! Reading the title and even the abstract is not enough.

P.S. The discussion in the comments section is interesting but on a different topic than I was writing about. I think it’s safe to assume that the paper being discussed above was rejected by the tabloids before appearing in PLOS-One. That’s not a surprise: People submit a lot of weak papers to the tabloids and most of them get rejected. When a weak paper appears in a tabloid, lots of people (including me) get annoyed, and we say things like, How could Science (or Nature or Psych Science) publish that crap! Even non-tabloid serious journals publish seriously flawed papers; in recent months we’ve discussed examples from the American Economic Review and the American Sociological Review. Again, in these cases we can get annoyed that major errors slipped past the review process.

But when a weak paper appears in PLOS-One, we cannot be shocked. PLOS-One by design will regularly publish weak papers along with the good stuff.

Many commenters remarked on the high quality of much of what appears in PLOS-One and urged us all not to restrict our reading to the so-called top journals. I agree completely. Many, maybe most, of my most influential papers over the years have not appeared in top journals, but they’re still good papers and represent important research!

But that’s my point. When we encounter crap in leading journals, we know how to react—with annoyance. To the extent we go beyond the top journals, we will still find a lot of crap even when not looking for it (with an extreme case being Dan Kahan’s encounter with the paper described above), and we need a way of thinking about it. Annoyance is not appropriate, and certainly it’s not appropriate to believe the claims in the abstract of a paper, just because it’s published.

To say it again: the point of this post is not to bash PLOS-One or the Journal of Theoretical Biology or other low-ranked but legitimate journals, it’s to explore how we should think about the papers that appear in such places. The traditional approach is to assume that everything published in a peer-reviewed journal is correct, but that’s not right.

46 thoughts on “How to think about papers published in low-grade journals?”

  1. PLoS One is quite intensively refereed and generally does NOT publish crap. It wouldn’t surprise me if the proportion of total unreplicable crap there is lower than in Science, which looks for the astonishing (and often gets it).

    What PLoS One does, by design, is to avoid getting into the question of what is Interesting (referees are told that this is not their job). By contrast, Science, Nature etc are all about what is interesting. Which means, what appeals to their staff editors and their editorial board. Sometimes that means favoring things whose importance is unquestionable, but more often it means favoring things that fall in line with recent trends and especially, things which somehow flatter the editorial board and their research tastes.

    The reason PLoS One is growing, and the reason it has many papers from excellent labs, is that many people have grown weary of trying to meet such an entirely subjective (and, I would say, in many cases rather corrupted) standard.

    • Dingo:

      In this case, I think it’s a safe bet that the authors submitted their paper to some subset of Science, Nature, etc., it got rejected, then they sent it to PLOS-One. To repeat what I wrote above: “PLOS-One publishes some good things . . . just because something’s in a crap journal, doesn’t mean it’s crap; I’ve published lots of papers in unselective, low-prestige outlets. . . .”

      Consider the paper discussed above. I’m not saying I think it’s crap because it appeared in PLOS-One. What I’m saying is that Kahan convinced me that it’s crap, and I commented that it should not be a surprise that a crap paper gets published in a journal that accepts everything. It seems fair enough for Kahan to be annoyed when reputable researchers publish a bad paper.

      Again, I publish in unselective journals all the time. I hope my papers will be judged by their content, not where they are published.

      • It’s clear to me, Andrew, that you’re one of the advanced and enlightened of your generation, but still… of your generation. Your attitude reflects the “which journal makes all the difference” bias that is the hallmark of people coming from longstanding traditional academia, and yet you are one of the most advanced academic bloggers and say things like “I hope my papers will be judged by their content, not where they are published.”

        In my opinion the future of academic publishing of articles is much more similar to the current publishing of books. I couldn’t care less who printed your book: Wiley, Cambridge, Addison-Wesley… whatever. What I care about is that it has your content and your chosen co-authors’ content in it. The same will be true of journal articles; as “A V” says below, “authors are the obvious source to follow rather than the journals.”

        If you want to call PLoS One “nonselective,” that would be fine, but I really object to calling it a “crap journal.”

        As an interesting statistics project, it would be very informative to select, say, ten articles at random from PLoS One and ten from Nature, review them for quality of scientific process and methodology, and see who has more crap ;-)
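
        Concretely, the audit could be scored something like this; the counts below are invented, and with only ten articles per journal the comparison would be hopelessly underpowered, but it shows the shape of the analysis:

        ```python
        # Sketch of the proposed journal audit. All counts are made up.
        from scipy.stats import fisher_exact

        n = 10                           # articles sampled per journal
        sound_plos, sound_nature = 7, 6  # hypothetical "sound" verdicts

        # 2x2 contingency table: rows = journal, columns = (sound, not sound)
        table = [[sound_plos, n - sound_plos],
                 [sound_nature, n - sound_nature]]
        odds_ratio, p_value = fisher_exact(table)
        print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
        # A real audit would need far more articles per journal to have any power.
        ```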

        • Daniel, I think you’re overselling your case. This is a Bayesian problem. The probability of a good paper is higher given that it is published in Nature, Science, etc. This can hold even if the probability of bad papers given they are published in Nature, Science, etc. is high.
          I’d like the StackOverflow-type model to work, but it’s much harder to adjudicate new research than to get consensus around already-established results.
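
          To put invented numbers on the Bayesian point, both statements can be true at once:

          ```latex
          % Invented numbers, purely for illustration.
          \Pr(\text{good} \mid \text{Nature}) = 0.4, \qquad
          \Pr(\text{good} \mid \text{PLoS One}) = 0.2 .
          % Then \Pr(\text{bad} \mid \text{Nature}) = 0.6: most Nature papers
          % are bad, yet a Nature paper is still twice as likely to be good as
          % a PLoS One paper, so the venue remains informative.
          ```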

        • I don’t know, Kaiser. In my field (perception & cognition), I am not sure that the papers published in Nature and Science over the past 20 years really have been particularly good at all. A few have been, certainly, but most have had little influence and/or have proven unreproducible. The best and most influential work has tended to appear in specialty journals which publish longer articles usually with a series of experiments, not a flashy one-off.

          Maybe in fields like immunology and cell biology these journals really know quality, but in the many disciplines that are further away, I don’t see much sign of this.

        • Ditto for evolutionary biology. With many exceptions, of course, the bulk of the influential work is published in the established society-sponsored journals.

      • Indeed. It’s kind of like Muslims and terrorists. Given a crap publication, it’s more likely to have been published in a less prestigious journal. But my prior on the converse would be a lot weaker.

    • Nature might look for the Astonishing & Sensational, yes. Perhaps.

      But there’s no reason to assume PLoS is any better at conducting the scientific-soundness review than Nature. Just because PLoS is editorially non-selective is no reason to think the papers it accepts are scientifically sounder.

      Nature reviewers are not encouraged to tolerate scientific shoddiness just because someone’s claiming something sensational.

  2. Yeah, see “Dingo Science” above. I STRONGLY object to your characterization of PLoS One as “crap.”

  3. Google Scholar is getting me used to an odd way of reading papers: authors are the obvious source to follow rather than journals, and text search is easier than quotations… The slot for searching by publication is there, but it is rarely necessary. Browsing a library of journals without covers comes close. Taking off the covers gets rid of the historic space of the editor’s contribution, unless they cross the line and place something between the covers themselves. The entire hoo-ha of journal doings turns into ‘reader beware’, not that I mind. There is a lot more to say on the subject, of course…

  4. Andrew:

    I think there are several issues being conflated here. For argument’s sake, let us characterize manuscripts in two dimensions: (1) scientifically sound; and (2) interesting (by some editorial standard, e.g. a counterintuitive result).

    As evidenced by arguments about publication bias, etc., there appears to be a somewhat substantial tradeoff between 1 & 2. Very few papers, the outstanding ones, are (True, True) in both dimensions; I would say only a handful in political science every year.

    So-called “Top” journals publish almost all (T,T) manuscripts available, but also a lot of (F,T) manuscripts to go along with them. Typically they also do not consider (T,F) manuscripts.

    PlosOne aims to publish (T,T) and (T,F), but in practice it faces two problems. First, if your manuscript is (T,T) you don’t send it to PlosOne, at least for now. (Perhaps in the future, when it has built a reputation, you will.) Second, the “applicant pool” for PlosOne is therefore composed mostly of (T,F) and (F,F). Without good filters this means you end up publishing some crap. (A toy simulation at the end of this comment makes this sorting story concrete.)

    Even so, I will argue that PlosOne is one of the best scientific journals around, for two reasons:

    First, not all crap is created equal. In terms of damage to scientific progress, I think publishing (F,T) crap, as Nature does, is a lot more destructive than publishing (F,F) crap. The latter is typically easier to spot by the trained eye. The former leads funding agencies, policy makers, unwary researchers, etc., down a rat alley.

    Second, refusing to consider (T,F) manuscripts, as many “top” journals do, is not only unscientific but unwarranted in an age of digital publishing without space constraints. A case in point here is the AJPS, which in its guidelines for manuscripts states:

    The American Journal of Political Science does not review manuscripts that:

    Are unsolicited rejoinders to recently published articles;

    To me this is totally unscientific, and it disqualifies the AJPS as a scientific journal, period. I don’t care how much (T,T) stuff the AJPS publishes; that is not enough to qualify as a scientific journal. Replication is key to science, and ruling it out tout court is unscientific. (And no, I am not saying they should publish every replication manuscript; these will have to be judged on their own merits. But to not even consider them?)

    In conclusion, I would not call PlosOne a bottom feeder. In my mind it is one of the few truly scientific journals around. And we should (a) be sending our best work to it, and (b) be helping them improve their filters so that the rate of (F,F) is minimized.
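
    Here, as promised, is a toy simulation of the sorting story above. Every rate in it is invented: interesting papers go to a “top” journal, the rest go to PlosOne, and each venue applies its own imperfect soundness filter.

    ```python
    # Toy model of the (sound, interesting) sorting story. All rates invented.
    import random

    random.seed(1)
    N = 200_000
    P_SOUND, P_INTERESTING = 0.5, 0.1
    FILTER = {"top": 0.8, "plos": 0.6}  # chance review catches an unsound paper

    published = {"top": [], "plos": []}
    for _ in range(N):
        sound = random.random() < P_SOUND
        interesting = random.random() < P_INTERESTING
        venue = "top" if interesting else "plos"  # authors sort by interestingness
        if not sound and random.random() < FILTER[venue]:
            continue                              # caught and rejected
        published[venue].append(sound)

    for venue, papers in published.items():
        crap_share = 1 - sum(papers) / len(papers)
        print(f"{venue}: {len(papers)} papers, {crap_share:.1%} unsound")
    ```

    Vary FILTER to taste; the point is that what each venue ends up publishing is driven jointly by its applicant pool and its filter.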

    • The flaw in your argument is assuming Nature publishes any more of the (F,T) stuff than PLoS does.

      Even assuming Nature selects in favor of the sensational, there’s no reason at all to believe that Nature does any worse at weeding out scientifically unsound articles than PLoS.

      • Rahul:

        Ultimately that is an empirical question, but (a) let me give you some reasons, and (b) don’t forget the other half of the problem.

        (a) Since you are asking for “reasons”: I would point to a large literature on incentives suggesting that if you reward one item that is cheaper to monitor – e.g. interestingness – over another, more costly one – e.g. scientific soundness – then editors and reviewers will make the tradeoff in the obvious direction.

        (b) Independently of the above, there are good reasons to believe Nature does not publish (T,F). That is very bad not only because of publication bias and the file-drawer problem but also – perhaps mainly – because what 2-3 people consider “interesting” is totally arbitrary. Akerlof’s market-for-lemons paper was rejected numerous times on this basis. Yet history shows it to have been a landmark.

        • Your file-drawer problem criticism is valid.

          OTOH, I feel good journals perform an important curating function. With limited time on our hands, one has got to be selective about what one reads. Maybe some meta-index will evolve that works better.

          Yes, it is the arbitrary taste of 2-3 people. But so are numerous other things in life that we accept.

        • Rahul

          I am not against curating in principle. We have edited volumes and reprints, bloggers, literature reviews, etc. But I also want the raw data.

          And, like others have commented here, I mostly access the literature through search engines, not journals. Moreover, every time I log into my Google Scholar I find a curated list of manuscripts related to my profile generated by Google. I feel I am old enough to judge their quality myself (and assume responsibility for my mistakes).

          More generally I like the idea, common in macroeconomics, of two targets, two instruments. One target is scientific soundness. The other is some notion of “interesting.”

          Perhaps the first target is best met by a quality certification program, maybe even one where author pays (think of improved rating agencies). The second target could be lit reviews, or computer generated profiles, or journals. But my money is on computers, not editors.

          Using one instrument to hit two targets often involves tradeoffs, and is typically suboptimal.

          Finally, and in contrast to what you are saying, I think we should publish more, _much much more_, but of better scientific quality and with structured semantics (XML, JSON, etc.), and then parse it with computers. Millions of articles are published every year already, and the trend is up. 2-3 people, even in a narrow field, are most likely lost in the woods.

        • I’d be wary of a rating where author pays. There’s a huge conflict of interest there.

          I’m not too fond of the PLoS practice of asking authors to pay a publication fee as it is. That’s somehow combining the worst of both worlds, I feel.

          I am all for automatic algorithmic curating. I just don’t think we are there yet. E.g., Google Scholar does a great job at searching, as well as at ordering results by relevance. But does Google Scholar have a good mechanism to rate for quality? I’d love to know if it does.

          PS. For rating web results it works well because of the intensely hyperlinked nature of the web. How does one do reputational rating for publications? Especially in the initial years when citations haven’t accumulated.

        • In almost all certification programs – from organic produce to fair trade – the certified pays the certifier.

          The constraint on the certifier is reputation. If I certify anyone then my certificate provides no info, consumers disregard it, and the certified will not want to pay. But details matter and it gets complicated.

          You are paying journals by signing away copyright. Then, because they have to sell copies, they want “interesting,” so we get (F,T). The current status quo is pretty bad, in my opinion. PLoS One is transparent and nonprofit (I think), so I’m quite OK with the author paying.

        • Texts relate by their content, not only citations or hyperlinks: the words you use place your text in a semantic neighbourhood. It is deadly effective for academic texts.

        • Same difference.

          I am thinking that relevance is a reading tool with potentially interesting rating properties. And review – the exercise of establishing correctness – is also reading.

          The information provided by search is relevance to the reader’s context – a reading tool. If this implies some properties for a ranking aggregating readers’ choices, interesting question, I think.

          Relevance ranking… establishing correctness… Now the second comes first. What of the other way around:

          Rankings and peer review lose the relevance of incorrect things – no complaint there – but they are not tools for establishing relevance at all. I don’t think there ever was the choice to start with a good knowledge of the relevance of what one has to say [including what I am reading but neither citing nor writing about, next to the more observable / expressive choices to take up a subject in writing and to cite] and then establish correctness. After all, peer review is enforced readership, of sorts; and its rules provide a hint that what is observable and the relevance [interest] of content do make a difference, even if observability is constrained to a semi-random choice of the reviewer’s context. I am intrigued by what might come of the correctness / interest tradeoff mentioned above, if interest were made exhaustively observable via ‘relevance ranking’.

          [back of the envelope]

        • I sure do put my money on NLP versus editorship, because the darn thing comes free of charge & with nifty externalities. For academic writing, these externalities are terra incognita…

          My feeling is that the ‘interesting’ criterion is the reader’s; the ‘sound’ one – with its apocryphal professional implications via rankings – belongs to the writer’s side. Interestingly, everyone reads more than they write. I wonder what might come of this competition of standards.

  5. There’s something quite interesting with PLoS One: if you find a flawed paper, you can write a comment directly on the website to argue your point, so that anyone who goes to see the incriminated paper can read your comment as well. Which other journal really does that? Also, I think PLoS One may publish a lot of things, more or less interesting, more or less robust. But I have the impression that PLoS One’s track record for handling retractions is better than that of other journals. At least that’s the impression I get when I read the blog “Retraction Watch.”
    I certainly wouldn’t call PLoS One crap, nor a bottom-feeder journal…

  6. In economics, meta-research has found that if you fit probability distributions (or histograms) of article citation rates for a top journal and a mid-ranked journal, you get substantial overlap. That is, a surprisingly large percentage of articles from mid-ranked journals end up cited more often than articles from a top journal (the research was in the Oxford Bulletin of Economics & Statistics, I believe).

    This finding suggests that the quality of articles decays slowly as you go down the journal ranks, so you should not ignore low-ranked journals. (A toy calculation at the end of this comment shows how heavy-tailed citation distributions produce this overlap.)

    What about the lowest-ranked, but still reputable, journals? In economics, those papers are typically “correct but of no interest”. However, sometimes a very weird paper will appear, and then either it is very wrong, or it might herald a breakthrough. So I would keep an eye out for those, just in case there’s a diamond in the rough.
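
    To make the overlap concrete, here is the toy calculation mentioned above; the lognormal parameters are invented (the actual study fit real citation data):

    ```python
    # Toy version of the citation-overlap finding. Parameters are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    top = rng.lognormal(mean=3.0, sigma=1.0, size=n)  # "top journal" citations
    mid = rng.lognormal(mean=2.3, sigma=1.0, size=n)  # "mid-ranked" citations

    print(f"median citations: top = {np.median(top):.0f}, mid = {np.median(mid):.0f}")
    print(f"P(random mid article out-cites random top article) = {np.mean(mid > top):.2f}")
    # Roughly 0.31 with these parameters: heavy right tails create substantial
    # overlap even when the top journal's median is about twice as high.
    ```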

  7. If a tree falls in a forest and there’s no one to hear it…
    The reason I’m not pursuing a career in academia (besides the low pay) is the pervasive “publish or perish” attitude, which results in the deluge of poorly-written articles of dubious merit that fill the pages of third-tier journals around the world. The flood is sloshed around by endless publications which, quite honestly, would disappear overnight if our universities didn’t have completely arbitrary requirements to publish constantly, regardless of whether the professor has anything useful to add. How many papers are published which are simply never read in any detail by more than a dozen like-minded friends? Meanwhile, there are more excellent articles put out each year in most fields than anyone could hope to digest and teach.

    Of course some very worthy work goes unappreciated by the big names. Like music, that “indie” work should have a forum, but we have web pages and Google to search for it. I just don’t personally put one bit of stock in a journal I’ve never heard of. I’ve read so much truly shoddy work in those rags that I just don’t have the time for it. I’d prefer to maximize my contribution by spending more time learning from the wiser monkeys and inspiring the young ones, while putting out much less but much better research when I have something of value to add. I wish we had a quarter as many journals full of 200% better work, and professors with more time to learn and teach.

  8. Interesting. During the 15 years or so I’ve been using journal databases like Medline to search for background literature when preparing a presentation, a study, or a publication, I never thought of judging an article by the prestige of the journal where it was published. If it is an article of sufficient methodological quality and it contains findings which I want to consider, why should I bother whether it is from a tabloid like Science or Nature, a leading (i.e. politically influential) journal in the field, or a mid- or low-prestige journal?

  9. Wouldn’t you agree that “rejected by [prestigious journal XYZ]” only implies that the paper in question is not incredible – as in mind-blowing (counterintuitive + true, as someone else said)? This doesn’t mean that only such papers get published there, just that only such papers stand a very high chance of publication.

    But since most papers, by definition, don’t meet this criterion, and all you can deduce from a paper’s being published in PLoS One is that it was rejected from XYZ, can we really deduce much? Yeah, it’s not “amazing,” but hand me a random page from Nature and I’d also guess it’s not “amazing” in that sense.

  10. >I couldn’t care less who printed your book, Wiley, Cambridge, Addison Wesley…

    This is a very interesting comment which I completely disagree with. I spend a fair amount of money out-of-pocket on scientific research & technical writing & I try to be an informed & very choosy consumer. I’ve found that at least some of the bigger American technical book publishers (e.g. Wiley, Addison-Wesley) are predictably bad at editing & proofreading their titles. British publishers such as Cambridge & Penguin OTOH seem to care about holding what they publish to dramatically higher standards.

    I see typos, unclear language, mismatches between text & graphics, even repetitions of text in American-published books, that I simply do not observe coming out of Cambridge or Oxford Press. My working hypothesis is that many Americans, across all SES, are relatively insensitive to poor writing style, whereas in the UK the elites obviously seem to take their language much more seriously (e.g. “Received Standard” & all that). It probably makes sense to the American publishers not to spend a lot of money on first-rate editing & proofreading, especially as they seem to be printing on ever-cheaper paper as well.

    Too bad for the bibliophiles & connoisseurs among us :/

    • I should probably clarify: I do actually appreciate good editing and clarity of language, and I do think Cambridge, for example, puts out a lot of high-quality stuff relative to some other publishers. If I know nothing about the author I might use this as an indicator, but when I know that it’s Andrew Gelman and colleagues writing the book, I couldn’t care less who is publishing it. The same is true of some other authors I know.

      • See, but that’s a different point. You are absolutely right but that just means we have several indicators that we mentally use to judge a work.

        e.g. I might read a paper because I recognize Andrew Gelman’s name on it. Or I might read it because the authors have an MIT affiliation. So also, a paper in a journal with a good reputation (e.g. Nature, Science, JASA etc.)

        The mistake commentators are making is not recognizing the curating or recommending function a journal plays. If a journal is utterly non-selective it plays a great role as a repository, yes, but the sifting function gets pushed out to something else. Which is fine, so long as we recognize it.

        • I’m fine with recognizing that in the past journal editors played a curating and recommending function. I think it’s also clear that the days of the “curating” function (defined as ensuring a consistent layout and readability standard, and archiving for posterity) could relatively easily be over as of today, at least in certain fields. The recommending function is largely orthogonal to the making-available-in-consistent-form function, and my money is on Natural Language Processing (NLP) and other computational methods for interestingness and relevance discovery. Scientific correctness will be hard to figure out in an automatic way, but honestly there are many fields where Nature and Science put up stuff that is most likely not correct (I’m thinking mainly of areas I know a bit about, such as bioinformatics; neither one publishes much at all in other areas I know a lot about, such as engineering mechanics, dynamics, earth sciences, etc.). So it’s not clear that the current system is all that great, which leaves a not-very-high bar for alternative methods of peer review to reach, such as post-publication non-anonymous moderated peer review.
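
          For a sense of what computational relevance discovery might look like, here is a bare-bones sketch; the abstracts and the query are placeholders, and a real system would layer citation graphs and learned embeddings on top of something like this:

          ```python
          # Minimal content-based relevance ranking: TF-IDF + cosine similarity.
          from sklearn.feature_extraction.text import TfidfVectorizer
          from sklearn.metrics.pairwise import cosine_similarity

          abstracts = [
              "hierarchical Bayesian models for small-area estimation",
              "football season wins and undergraduate grade point averages",
              "posterior predictive checks for multilevel regression models",
          ]
          query = "Bayesian multilevel models and model checking"

          vectorizer = TfidfVectorizer(stop_words="english")
          doc_vectors = vectorizer.fit_transform(abstracts)
          query_vector = vectorizer.transform([query])

          scores = cosine_similarity(query_vector, doc_vectors).ravel()
          for score, text in sorted(zip(scores, abstracts), reverse=True):
              print(f"{score:.2f}  {text}")
          ```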

    • I agree strongly with Brad. I am British and would like to believe that matters were better done over here, but I’ll leave that on one side.

      Like many other things, academic and technical book publishing has changed more in the last few decades than in the previous few centuries. We’ve shifted away from a model in which authors wrote their books by hand, typesetters put them in type, and (ideally) copy editors and proof-readers were sharp-eyed specialists who did one thing well: identify likely small mistakes. (Some decades when typists converted the hand-written copy into something more legible, or a few authors mastered typewriters, now seem a curiously different phase.)

      The model now is clearly that just about every author produces computer-readable files, so why aren’t standards much higher?

      1. A broader educational shift to less emphasis on grammar, spelling, and punctuation means that even the authors and publishers’ staff often commit or are unaware of many minor errors (especially, but not only, from the point of view of pedants or language conservatives).

      2. An increasingly common model, even from small technical publishers, is that numerous _named_ individuals contribute to any book project. In principle, as with software debugging, the more eyes that look, the more likely it is that errors will be spotted. Also, in principle, being named as responsible should mean that a reputation can be made or marred in public. In practice, this model often produces worse results. One computer publisher I know of has a model of a named technical editor, several named technical reviewers, plus in-house specialists supposedly copy editors, proof-readers, etc., etc., so that about 20 people can be listed as responsible for a short 200-page technical book. But the results can still be dire (on the order of a mistake of some kind per page). Analysis seems all too easy. Most of these individuals lack the expertise or the experience to do a good job, and almost everybody assumes that somebody else should be doing a good job. This can include authors being careless about details but assuming that their work will be checked assiduously by someone else. Often, the technical reviewers are just buddies of the authors, whose main role seemingly is to provide puff for the book that appears on websites or the book’s back cover. In addition, all these people naturally have many other commitments.

      3. The person you really want to work on your technical book would have (no meaning to be attributed to the order here) (a) a feeling for good style (b) familiarity with basic rules of grammar, spelling and punctuation (c) the capacity for minute attention to detail and consistency (d) strong technical understanding of the material. That’s an unusual combination of talents (leading to the team model just described as a natural reaction).

      4. Publishers’ reputations are hard to build up but easy to destroy from within. Many good outfits have been spoiled by takeover by larger firms (leading to problems of motivation or morale) and/or their own success. For several years I acquired numerous excellent books on one or other Unix programs from O’Reilly, a firm that seemed to echo the Unix attitude of doing one thing really well. But they have branched out into too many things, and (for example) most of their recent books under statistics and graphics headings have been mediocre or worse. It seems no accident that in some fields the enduringly good books are self-published (in statistical graphics, Edward Tufte and William S. Cleveland).

  11. Pingback: Somewhere else, part 65 | Freakonometrics

  12. 1. Statisticians should know better than to cherry-pick. I assume there is good and bad in each of the journals discussed above. What are the proportions? To answer that, we’d need to do better than to cite a few examples that confirm our biases.

    2. I’ve published in PNAS and Psychological Science, but my preference is to FIRST send things to PLOS One. If all research were published there, all researchers would be better off. However, this is a coordination problem with risk (not a prisoner’s dilemma); there’d be no cost if a *PLOS One or bust* norm were to take hold. For now, however, people will use the prestige of a journal as an indicator of a publication’s value. If we all were to publish in PLOS One, though, people wouldn’t be able to be lazy like that.
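
    To see why this is a stag hunt rather than a prisoner’s dilemma, consider an invented payoff matrix (row player’s payoff listed first):

    ```latex
    % Invented payoffs. Each player chooses where to send their papers.
    \begin{array}{c|cc}
                    & \text{PLOS One} & \text{Prestige} \\ \hline
    \text{PLOS One} & (4,4)           & (1,3) \\
    \text{Prestige} & (3,1)           & (3,3)
    \end{array}
    % Both (PLOS, PLOS) and (Prestige, Prestige) are equilibria; the first is
    % better for everyone but risky to move toward alone. In a prisoner's
    % dilemma, Prestige would dominate regardless of what others do; here it
    % does not.
    ```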

  13. I have a relative who is a very prominent political scientist. He is sports crazed (that is a technical term) and will not countenance any criticism of college sports, which he of course supports enthusiastically. I like to say that I, in contrast, am a genetic defective, clearly missing the gene that makes men sports crazed and unable to ask for directions when lost. This doesn’t go over well, since he clearly has the gene.

    To the point: one of the AEA’s journals is American Economic Journal: Applied Economics, and when last year they published a piece by some U of Oregon economists showing that grades declined when the football team had a winning season, I sent it to him, and before even reading the piece he wrote back, “Is this a good journal?”

    And therein lies the problem. As several commenters here have noted, what really matters is the seriousness and analytical competence of the article, not where it is published.

    To illustrate: prompted by a few extended discussions with my relative about my reading of the neuroscience literature on political orientations and brain morphology, the findings of which he does not accept, I decided to search for the word “neuroscience” in the American Political Science Review for the last 25 years (1988 to the present). There were exactly 7 hits.

    How is this possible, given the great interest in neuroscience and political orientation since fMRI became available? Of course, it is the gate-keepers, and it reminds me of the Max Planck quip that “science advances one funeral at a time,” and another, paraphrased: “there is nothing more upsetting than a new idea.”

    So we need to keep this in mind when we make reference to journal ranking as a measure of quality. As a few have commented here, it really is the quality of the piece that matters, and less the journal, especially if the subject is relatively new.

    • This is the problem with social sciences – way too much weight is being placed on the perceived prestige of the journals, which themselves have turned into clubs.

  14. I’ve read a lot of excellent work in PLoS ONE. In some fields (in particular, interdisciplinary fields), some top labs tend to publish their second-tier work there (the top-tier work goes to Nature/Science and PNAS). The review process is relatively fast, and papers there with good quality can attract people’s attention quickly.

    On the other hand, most of the crap disappears without many views…

  15. Pingback: ¡Por fin ya es viernes! | La Ruta Inconsciente

  16. Andrew, you are not considering that some people may publish on arXiv and avoid the so-called “high-ranking” journals because they are too busy doing research and thinking about problems and do not have the time to deal with referees, or because they have absolutely no respect for social-science academia (AER, Journal of Finance, etc., where they deem that charlatans with no skin in the game publish for the job market, not for science). And they may submit to PLOS-One or other journals because they want to spend more time thinking, less time trying to fit into the discourse.
    In the end, you should mature and consider that the only referee is TIME, not some journal.

    • Nassim:

      Yes, but in this particular case, these authors have published in journals such as Science and Nature, so I suspect that this paper appeared in Plos-One only after being rejected by those others.

      • Andrew, it may also mean the authors don’t need to waste time playing the game anymore. It would be far more rigorous to compute the “real” effect of “higher” vs. “lower” journals (in hard science), adjusted for the marketing effect of the name, and to check whether rejection of a paper by Science means anything.
