
What if I were to stop publishing in journals?

In our recent discussion of modes of publication, Joseph Wilson wrote, “The single best reform science can make right now is to decouple publication from career advancement, thereby reducing the number of publications by an order of magnitude and then move to an entirely disjointed, informal, online free-for-all communication system for research results.”

My first thought on this was: Sure, yeah, that makes sense. But then I got to thinking: what would it really mean to decouple publication from career advancement? This is too late for me—I’m middle-aged and have no career advancement in my future—but it got me thinking more carefully about the role of publication in the research process, and this seemed worth a blog (the simplest sort of publication available to me).

However, somewhere between writing the above paragraphs and writing the blog entry, I forgot exactly what I was going to say! I guess I should’ve just typed it all in then. In the old days I just wouldn’t run this post, or I’d postpone it, but my “on deck this week” feature is serving as a bit of a preregistration, so I feel I should write something on the topic even if it’s not as thoughtful or elegant as I was hoping for.

So, in brief . . . I don’t need publication for my career but I like to publish things anyway. Why? Here are some reasons, in no particular order:

1. Getting more readers. Sure, more people read this blog than read a typical journal, but with journal publication I might reach some people who don’t read this blog and don’t follow arXiv, for example (not that I usually put my papers on arXiv, but I probably should, as that’s a free way to get yet another stream of readers).

2. Longer-term readership. Most blog hits happen on the very first day, and if someone doesn’t check the blog on the day that some wonderful paper appears, they’re out of luck. A paper in a journal is sitting there for a long time; it gets cited, linked to on Google Scholar, and so on. When I write a paper I’m aiming for lasting impact, not instant attention.

3. That feeling of closure. On another blog, organizational theorist Balazs Kovacs was derided for writing that “The main reason that I love getting a paper published is that then I can close the process and move on to other new and exciting projects.” Setting some of his specific proposals aside, I see where he’s coming from. It’s good to feel that a project is done.

Unfortunately, I don’t think this is such a good motivation (even though I myself feel it). When I look back on various 5-year-old or 10-year-old or 20-year-old projects that I just wanted to get done, just to get the projects out of the way, I realize that the final products are not so wonderful and in retrospect maybe they could’ve been left unpublished.

4. That stamp of approval. The NUTS paper will appear in JMLR. People can now implement the algorithm themselves if they’d like, secure in the knowledge that it’s been vetted by JMLR referees. OK, the vetting is not always so great—as we all know, mistakes do get into the published literature. But from our perspective as the developers of an algorithm, we thought it worth it to put in the effort to get that official status.

5. Wikipedia requires a published reference. This one’s gonna sound pretty silly, but . . . we wanted to put Stan in Wikipedia. But to do this we need a published article on Stan. Not just a published article that uses Stan (believe it or not, these already exist) but an article about Stan. So Bob wrote something for JSS.

6. Playing the game. I don’t need career advancement but students and postdocs do. So it makes sense to publish for their benefit. Also I should have some minimum number of publications that I can mention in progress reports for research grants and visiting positions.

7. Discussions. If you publish a discussion paper in a journal you can get thoughtful comments from leaders in the field. For some examples, go here and search on *with discussion*.

8. Getting on the agenda. Nowadays I can do this one more easily by blogging, but in past years, back when journals were printed and appeared in everybody’s mailbox on a regular basis, the right article at the right time could get a lot of attention. (The goal, of course, is not the personal attention but rather whatever it takes for people to take the time to look at your paper.) Every once in a while this still happens; for example, I think the Girolami and Calderhead paper on RHMC got extra attention for appearing in a journal, compared to just floating around on arXiv, and a few years back my Struggles with Survey Weighting and Regression Modeling similarly appeared in the right place at the right time.

9. Responses. If we see something published that’s interesting or bothersome, and we have somewhere to go from there (whether it be a criticism, a refutation, or an improvement), we’d like to publish this in the same place, to reach the people who are (or should be) most interested in the topic. Unfortunately this is not always possible, but still I try (although not as hard as some people do).

10. The requirements of publication generally improve a paper. Not always. Sometimes there is awkwardness in writing for the reviewers rather than for the readers. But in most cases we get useful comments from reviewers, and the steps we take to make our papers better can make all the difference.

Are those reasons enough to go through the bother of publication? Sometimes. Often, it seems. Or maybe I still publish mostly out of habit.

P.S. I think some readers missed the point of this post. It was not my intent to defend journal publication or the current system of journals. Rather, I was just trying to lay out, as clearly as possible, my own motivation for continuing to bother submitting papers to journals, given the nontrivial efforts of submissions and revisions and the unclear gains from publication. I do have some reasons for publishing. That doesn’t mean that the current system is so great, I’m just explaining how I operate within it.

P.P.S. More here.


  1. Martin says:

    “I’m middle-aged and have no career advancement in my future”

    No, no, don’t talk like that! If you really, really want to, you can become President of the United States!!

  2. Entsophy says:

    Perfect. Every one of those points is nearly trivial to solve. By laying them out, you’ve just provided some Silicon Valley startup a road map they can use to create the Github of Research.

  3. The issue for me is which of these goals can be satisfied without the pain (mainly due to conservative bias) of the usual publishing route.

    1. I don’t actually look through journals any more, so I never ever ever come across a paper by stumbling over it in a journal. I find out about almost everything by either searching or through word of mouth. Publishing in non-open-access journals will reduce the number of potential readers. I usually just give up if I can’t download a paper because I can never be bothered to fish out my Columbia credentials to find them. And I couldn’t get them when I was outside the university system.

    2. Most do, but there’s a very long tail. Especially for popular posts. I’m posting the log of top hits in the past quarter for the LingPipe Blog, which has good Google juice, but probably not as good as this blog’s. Also note that just about everything with a parenthesized date in the title is a review I did of a paper that had already been published — I decided several years ago to pretty much refuse all journal reviewing requests and instead just publish reviews of things I liked enough to be worth improving straight on my blog.

    3. Well, you didn’t even list this as a positive, but doesn’t finishing the paper give you a sense of closure no matter how it’s distributed? The only reason it wouldn’t is that you imagine publishing in a traditional journal itself provides the closure, not finishing the paper. You don’t have to update arXiv papers!

    4. I think the stamp of approval is largely bogus. NUTS is cited all over the place. So is Radford’s work. Think about your own behavior. Do you really trust papers just because they show up in a journal? I tend to trust them when someone I trust recommends them. Or when I read them and understand the good ideas in them. At most, I think the journal “stamp of approval” applies some kind of filtering, but I’m less than convinced it’s a useful filter.

    5. Wikipedia is silly for this, but we can’t change their behavior, so it’s a valid point. But that paper I wrote is now tied up in JSS’s editorial process, where it’s been since December with so far only feedback on form (how to distribute the source to them and license it so that we don’t violate other licenses), and on formatting (names of systems in sans serif, no acronyms in abstracts, title case for journal articles in appendices, etc., none of which I feel has improved the paper).

    6. This is also a valid point until the game changes. So I think it’s the tenured faculty member’s job to break the vicious circle. My own department head when I was coming up for tenure (Teddy Seidenfeld at CMU) told me they’d make the decision based mainly on asking a bunch of people in the field whether I’d made a positive contribution, not on the weight of papers, etc. And they did, because even in the 1980s, I preferred writing code and books to journal articles. My reviews came back with comments like “this work is too European” (I did my Ph.D. in Edinburgh) or “this is not linguistics” (just another form of “this is too European”, because the Chomskyans dominated U.S. linguistic reviewing). The NSF bounced one of my grant proposals between linguistics and computer science for a year, each claiming it wasn’t appropriate for them, but was appropriate for the other (the danger of interdisciplinary work, like computational stats).

    7. I think discussions are great. Conferences are good for this, too, if run the right way. I don’t see any reason why discussions shouldn’t happen on something like arXiv, but right now, there’s no facility for it.

    8. I’d like to see the controlled experiment here. I’d like to think the RHMC work got on the agenda because it was super cool, not because it was in a prestigious journal, but that may just be my own biases talking. I know I didn’t run into it from browsing the article list in the journal — it was in that British journal with “read papers”, right? I don’t even know what that means, but I’m imagining Isaac Newton standing up in a crowd of bewigged gentlemen!

    9. I don’t see why publishing helps with getting responses. But like I said earlier, I don’t troll the tables of contents of journals looking for papers anyway. I used to back when paper journals showed up in my mailbox, but I rarely found anything interesting. Columbia stats didn’t even give me a physical mailbox (maybe I have one with a stack of bumph in ISERP, but if so, I’ve never seen it) — I didn’t even notice I didn’t have a mailbox for over a year until I stumbled across the faculty mailboxes in the copier room when Daniel was making some coffee. (I also haven’t used the printer or photocopier or fax machine if the latter even still exist.)

    10. Again, I don’t see that this is the sole prerogative of peer-reviewed publication. Do you realize how many people (like me and one of your own grad students) have gotten feedback from editors that Bayesian stats isn’t acceptable? Mitzi got feedback on a paper published in Science to include p-values for her exploratory data analysis using clustering, because they wouldn’t publish anything with the whiff of stats without a p-value! Pshaw! I say. I just got a set of comments back from the latest paper a colleague wanted to submit, and though it was conditionally accepted, I honestly don’t know what to make of the comments, many of which are impossible to satisfy and others of which run completely contrary to the version I submitted to a conference (the conference paper was Bayesian, but they said I didn’t justify the use of Bayesian stats and should’ve compared to a non-Bayesian approach; so we went non-Bayesian in the paper, and now the editors want a comparison of the penalized MLE fits with a Bayesian approach — is this what we should be training people to do in papers — run comparisons with big tables of numbers?).

    P.S. Here’s the stats on post views from the last 3 months:

    Home page / Archives		2,714
    Eclipse IDE for 64-bit Windows and 64-bit Java		2,406
    Using Luke the Lucene Index Browser to develop Search Queries		1,670
    Limits of Floating Point and the Asymmetry of 0 and 1		1,153
    Lucene 2.4 in 60 seconds		838
    Upgrading Java Classes with Backward-Compatible Serialization		552
    New Book: Text Processing in Java		492
    Log Sum of Exponentials		474
    Canceled: Help Build a Watson Clone–Participants Sought for LingPipe Code Camp		470
    Computing Sample Mean and Variance Online in One Pass		399
    Why is C++ So Fast?		347
    Scaling Jaccard Distance for Document Deduplication: Shingling, MinHash and Locality-Sensitive Hashing		308
    Computing Autocorrelations and Autocovariances with Fast Fourier Transforms (using Kiss FFT and Eigen)		306
    Serializing Immutables and Singletons with a Serialization Proxy		294
    JDK 7 Twice as Fast* as JDK 6 for Arrays and Arithmetic		256
    Collapsed Gibbs Sampling for LDA and Bayesian Naive Bayes		228
    Bayesian Counterpart to Fisher Exact Test on Contingency Tables		204
    How to Prevent Overflow and Underflow in Logistic Regression		201
    Bayesian Naive Bayes, aka Dirichlet-Multinomial Classifiers		184
    Lucene 4 Essentials for Text Search and Indexing		184
    Lucene Tutorial updated for Lucene 3.6		169
    Processing Tweets with LingPipe #3: Near duplicate detection and evaluation		166
    Convergence is Relative: SGD vs. Pegasos, LibLinear, SVM^light, and SVM^perf		160
    Apache Lucene 3.0 Tutorial		160
    Arthur & Vassilvitskii (2007) k-means++: The Advantages of Careful Seeding		148
    What’s Wrong with Probability Notation?		143
    Nadeau and Sekine (2007) A Survey of Named Entity Recognition and Classification		140
    Lucene or a Database?   Yes!		136
    ThinkPad W510 Review: First Impressions		134
    LingPipe Home Page		125
    Which Automatic Differentiation Tool for C/C++?		123
    Code Spelunking: Jaro-Winkler String Comparison		110
    Updating and Deleting Documents in Lucene 2.4: LingMed Case Study		109
    What is “Bayesian” Statistical Inference?		101
    Book: Building Search Applications: Lucene, LingPipe and Gate		97
    IBM’s Watson and the State of NLP		96
    The Entropy of English vs. Chinese		95
    Processing Tweets with LingPipe #1: Search and CSV Data Structures		95
    Bayesian Estimators for the Beta-Binomial Model of Batting Ability		94
    Chapelle, Metzler, Zhang, Grinspan (2009) Expected Reciprocal Rank for Graded Relevance		93
    Bayesian Inference for LDA is Intractable		92
    Computational Linguistics Curriculum		91
    Why do you Hate CRFs?		87
    Softmax without Overflow		85
    Generative vs. Discriminative; Bayesian vs. Frequentist		85
    Deleting Values in Welford’s Algorithm for Online Mean and Variance		79
    Upgrading from Beta-Binomial to Logistic Regression		77
    Mean-Field Variational Inference Made Easy		75
    • Entsophy says:

      “I don’t actually look through journals any more, so I never ever ever come across a paper by stumbling over it in a journal.”

      When I was younger and far more naive about science, I went through a 3-year phase where I would spend hours every day systematically looking through every article published in journals I thought had a good chance of having something good. In all that effort I never once accidentally ran across a good paper. Every good paper I’ve ever encountered was written by someone well known, or referenced in a paper by someone well known.

      “Well, you didn’t even list this as a positive, but doesn’t finishing the paper give you a sense of closure no matter how it’s distributed?”

      It’s worth pointing out that the Greats often hit upon a problem in their early twenties and then kept coming back to it over the following decades (sometimes the following half century). Many of the greatest achievements that we remember and use today are really the culmination of a lifetime spent attacking and re-attacking the same problem until it was developed into an insight that made it simple and far-reaching.

      If the problems people work on today don’t have that quality, then how significant is that work?

    • Fernando says:

      Bob: “I don’t actually look through journals any more, so I never ever ever come across a paper by stumbling over it in a journal. “


      Bob: “I think it’s the tenured faculty member’s job to break the vicious circle [of publish anything or perish]”


      8. I’d say what gets on my agenda is mostly through social media, especially Twitter and my RSS reader. I get journals in the mail (APSR, PA) that I throw in the bin after perusing the table of contents. It’s clutter, and I can find the papers online.

      10. Reviews help improve publications. Partly true, but typically at great expense as Bob notes. There must be better ways to do this.

      • Jeremy Fox says:

        Re: how people find papers to read when they’re just looking to “keep up with the literature” or “stumble on something interesting” (as opposed to, e.g., searching for papers on a specific topic), I did a survey of my blog’s readers on this. Not a random sample from any well-defined population, obviously. But for what it’s worth, the broad picture is that ecologists keep up with the literature using a mixture of old-school and new-school methods, with the most popular methods being old-school ones like scanning journal TOCs. And in a surprise to me, students and established profs use quite similar methods.

        I suspect that similar surveys would give different results in different fields, but I really have no idea. And I’m sure things will change over time, though I suspect the timescale of really wholesale change will be measured in decades rather than years (just a guess).

        • Fernando says:

          Yes I scan TOCs but using the journal’s RSS feed in my RSS aggregator — not by perusing the paper journal, or browsing its web page.

          • Jeremy Fox says:

            Ok, but an RSS feed is just a new way of doing the same thing: scanning journal TOCs. I took Bob and some of the other commenters as saying that they do something much different: ignore journal TOCs entirely, and filter the literature by other means that at least in principle are independent of journals.

            Of course, those other means aren’t always independent of journals in practice. For instance, I suspect that papers in leading journals get shared a lot more on social media, blogged about a lot more, and discussed in a lot more face-to-face conversations among friends and colleagues, than papers in obscure specialist journals or mega-journals.

            Which I suppose just means that, however you prefer to filter the literature, you probably are going to have to put a bit of thought and effort into coming up with filtering methods that work for you and achieve whatever it is you’re trying to achieve. Which I think is something most people do. I’m fairly old-fashioned about how I keep up with the literature and find interesting papers to read (though not completely old-fashioned), but that’s not out of habit, it’s because that way of working really does work for me. I’m sure it wouldn’t work for many other people, of course. And if it ever stops working for me (and I think I’ll realize it if/when it does), I’ll change.

            • Fernando says:

              You should try Google Scholar. If you have an account it basically alerts you to all new articles relevant to you.

              BTW your survey is a little different from what I think Bob has in mind. You ask something like “if you can’t travel by car or taxi to work, do you use public transport?”. Bob asks “how often do you use public transport?”.

              • Jeremy Fox says:

                I do use Google Scholar, as a supplement to looking at journal TOCs and sometimes following up citations I happen across in papers I read or (very occasionally) see discussed on blogs.

                I may perhaps be misinterpreting Bob, I’m not sure. But if so, then I’d suggest that the question I posed is the more interesting one to ask. Like most everyone (I think!), if I want to find papers on a specific topic (say, I’m writing a review of the topic, or I’m planning an experiment on the topic), I do a search. Or some equivalent of a search, like set up a feed or alert that notifies me every time a new paper on topic X is published.

              • Fernando says:


                I was not referring to Google Scholar the search engine but to the Google Scholar welcome page for those with Google Scholar profiles. You’ll find a curated list of articles right there.

              • Jeremy Fox says:


                Yes, I know exactly what you mean by Google Scholar recommendations. That’s what I was referring to.

            • Right. I ignore journal’s TOCs altogether. I used to have JMLR in my RSS reader, but when their feed changed, I realized it wasn’t worth the trouble to me to find the new one.

    • Anonymous says:

      I’ve been comparing search engines recently and they now give very different results. Enter “Eclipse IDE for 64-bit Windows and 64-bit Java” (I use Eclipse too) in Google and DuckDuckGo. Your entry is top in the latter but does not even appear on the first page of the former.

  4. Perhaps I’ll find time to comment more later, but for now I’ll just note that I’m curious how universal Bob’s statement is: “I don’t actually look through journals any more, so I never ever ever come across a paper by stumbling over it in a journal.” I sift through the contents of several journals every week, and it’s rare that there’s a week in which I don’t find at least a few articles to be interesting. In fact, what’s now one of the main research directions for my lab came about largely as a result of seeing a neat paper on a new microscopy method, not related to my prior work, and deciding to adopt it. Would I have come across this otherwise? Probably — but perhaps much later, and perhaps just as some shallow blurb on a ‘new result’ on some aggregator site that I would likely ignore.

    Of course, I also wander the stacks at the library looking for random books, so I am perhaps hopeless.

    • Jeremy Fox says:

      “I’m curious how universal Bob’s statement is”

      See my comment above for a bit of data on this. At least among ecologists, Bob would be an outlier.

    • Phil says:

      Every few months, maybe only twice a year, I look at the Table of Contents of a few journals to see if anything looks interesting.

      If I have an idea that I think might be worth pursuing, or I want to know what work has already been done in a given area, I search (Google Scholar, Scirus, plain ol’ Google, and looking for title words or keywords on Web of Science) to see what is already out there. When I find relevant papers I also look at what they cite.

      And of course I hear about things through word of mouth etc.

      So, yeah, add me to the people who never find papers by browsing journals. (What, never? Well, hardly ever.)

  5. Christian Hennig says:

    “The single best reform science can make right now is to decouple publication from career advancement.”
    I’d be curious how instead research achievements of a young researcher should be measured. (I’m not saying it isn’t possible; and obviously already right now journal publications aren’t the only thing to look at – but still I have the feeling that all ideas I’ve come across up to now that involved axing looking at journal publications altogether lose an important aspect of what is relevant.)

    By the way, I browse journal TOCs, and also in my perception the peer review system has done more good than harm, albeit perhaps by a narrow margin.

    • Entsophy says:

      The value of researchers, like the value of their research, can only be determined in one way: someone makes a judgment about it.

      If that judgment is good, science improves. If it’s bad, science stagnates. There is nothing more to it. There is no gimmick which improves it. There is absolutely no way around this.

      • Christian Hennig says:

        But this is what the peer review system does, isn’t it? And in it one could see a rather healthy element of random variation in who actually makes the judgements, although admittedly it’s not perfectly random, even under the constraint of expertise. (It’s interesting to me whether opponents of the peer review system would rather want to see broader or narrower groups of people making judgements. If the answer is “broader”, how to organize this?)

        • Entsophy says:

          No. Peer review both restricts who is doing the judging and then gives them power to censor work.

          Broader is better. The history of the Royal Societies in Europe is one of them missing major contributions because they had too much control. “Broader” means “greater chance of good stuff slipping through”.

          Or let me say it like this: it’s extraordinarily unlikely that the best reviewer in America has a resume that would get them a position reviewing papers. Open the whole thing up and see what shakes out.

          • Christian Hennig says:

            My experience as Associate Editor is that the problem is not so much to restrict who is doing the judging, but rather to find some reasonably good experts who are willing to do it. I’d be more than happy if a broader selection of people would queue in front of my door to write reviews…
            I’m not sure that, if everyone put everything on arXiv and there were a comment section where people could leave reviews freely, the amount of quality reviews would go up.

            • Entsophy says:

              Forget journals, peer reviews, and arXiv. Instead of someone like Andrew conducting peer reviews for some blah blah journal, he could accept manuscripts from people and, for the ones that matter to him, post a commentary together with the unedited manuscript on the blog.

              The author can do whatever they want with the commentary, and if Andrew puts out crap papers with crap commentary then everyone stops reading his blog.

              Such systems have worked in at least two instances. One was in the olden days of science; the other is in the intel community. While there are lots of official dissemination methods in the intelligence community, the best stuff and best analysis often comes via inspired people who perform a function very similar to Mersenne’s, and do it in a way remarkably similar to the way he did.

            • Entsophy says:

              Also, being a good screener of scientific papers is a phenomenally valuable skill. The rewards to anyone who can do this well and consistently are likely to be very much higher than most people expect.

              • Entsophy says:

                Let me back that claim up with some numbers. The film industry in the US sells about $10 billion worth of tickets every year. Ebert the film reviewer was worth about $10 million.

                In 2007 according to the NSF, the US spent north of $350 billion on R&D.
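                Taking the comment’s numbers at face value, the implied scaling can be sketched as follows. This is a naive proportionality assumption of mine, not a calculation Entsophy spells out:

```python
# Rough scaling of the Ebert analogy: if one good reviewer was worth
# ~0.1% of a $10B industry, what would the analogous figure be for
# $350B of R&D? All figures are from the comment above; the linear
# proportionality is my own (loud) assumption.
ticket_sales = 10e9   # annual US film ticket sales, per the comment
ebert_worth = 10e6    # estimated worth of Roger Ebert, per the comment
rd_spending = 350e9   # 2007 US R&D spending, per the NSF figure cited

reviewer_share = ebert_worth / ticket_sales          # 0.1% of the industry
science_screener_value = reviewer_share * rd_spending

print(f"implied value of a top research screener: "
      f"${science_screener_value / 1e6:.0f} million")
# prints: implied value of a top research screener: $350 million
```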

              • Christian Hennig says:

                This all sounds very nice, but you’d have to find enough good people who’d be happy to do it.
                This was a bit easier in the old days, when access to academia was much more restricted and people were not coming up with thousands of papers every day.
                Not sure whether Andrew would be happy with the workload.

                Anyway, I’m not too much against such ideas. I guess that if science moved in this direction, I would take part and it may well be that I’ll think then that it is an improvement. Currently I do quite a bit of reviewing and screening in the “old” system and have therefore a more positive attitude about it than many on this blog, but I can imagine that what I like about it can be rescued.

              • Fernando says:


                The future is in machine assisted review.

                We should get rid of journals, rely on repositories, _increase_ the number of manuscripts, and process them intelligently.

                There will be problems down the road but that is true with anything.

            • Entsophy says:

              And one more thing Christian: There’ll be a lot fewer papers to review once people aren’t being promoted based off paper counts.

  6. Dale Lehman says:

    I think the original comment needs to be more nuanced – otherwise this devolves into whether or not it is worth doing research at all. We need to decouple academic careers from the quantity of publication – we need (more than ever, I’d argue) to keep it tied to quality of publication. Citations are a much better guide to quality than the number of publications will ever be. For some reason, academics are loath to actually try to judge the quality of research work. The fact that replication is nearly impossible further feeds this reluctance. It is time that we put a value on replication – we should even place a value on doing work that is capable of replication (for example, making one’s data easily available). It should not be left to journals to require making data available – there is nothing to prevent a researcher from making their data available on their website regardless of whether a journal requires this or not.

    If we made the data available – and if we did interesting research – then the citations would follow. The decoupling that needs to take place is to not penalize an academic who chooses this route. They will certainly end up with fewer publications (since they have made their data available and can’t keep milking it for as many articles as possible) but with more citations (provided their work is interesting).

    • Dan Wright says:

      In the UK the REF (Research Excellence Framework, formerly the RAE) has had much discussion about how to evaluate journal articles and other outputs. There is a lot of argument about what this exercise has done, and how it has affected how people publish (and how universities hire). The result of it determines how much money is allocated to the university for research, so it is very important to administrators, and also important for recruiting. Because citations often have a considerable lag, I think the choice of which outputs to include is dictated more by journal than by citations.

    • Entsophy says:

      “I think the original comment needs to be more nuanced”

      That’s my cue to make it LESS nuanced.

      Those parts of science which are stagnant, and have been for some time, stand no chance of improvement without decoupling publications from careers. No chance whatsoever. It’s the absolute minimum prerequisite.

      The deadweight loss alone from researchers flooding the journals with papers to beef up their totals is surely at least a third of the current US research effort. Which means it’s somewhere over $100 billion research dollars, or about five fully funded Manhattan Projects a year.
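      A quick back-of-the-envelope check of that claim. The $350 billion R&D figure is from the comment; the roughly $22 billion inflation-adjusted cost of the Manhattan Project is my own rough assumption, so the result is only an order-of-magnitude sketch:

```python
# Sanity-check the "five Manhattan Projects a year" figure.
us_rd_spending = 350e9   # annual US R&D, per the NSF number cited above
manhattan_cost = 22e9    # rough inflation-adjusted estimate; my assumption

deadweight = us_rd_spending / 3            # "at least a third" of the effort
projects_per_year = deadweight / manhattan_cost

print(f"deadweight loss: ${deadweight / 1e9:.0f} billion/year")
print(f"Manhattan Projects per year: {projects_per_year:.1f}")
# prints: deadweight loss: $117 billion/year
# prints: Manhattan Projects per year: 5.3
```

So "somewhere over $100 billion" and "about five" are consistent, given those assumptions.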

  7. K? O'Rourke says:

    From Bob “tenured faculty member’s job to break the vicious circle”

    Is anyone aware of universities or research institutes that are trying to pioneer such a change?
    (Howard Hughes Medical Institute came to mind, but no idea if true.)

    • Entsophy says:

      Statistics is in a far better position than most fields to make the change for the following reasons:

      -Stat research is cheap.

      -Stat research is valuable.

      -Stat has a long history of allowing anyone, from any background, who thinks they’ve got something, to make their mark.

      -Stat is many orders of magnitude less status obsessed than fields like Economics.

      -Stat has no one in control the way the big 5 Econ journals and big 5 Econ grad schools have control in Economics. Stat may be full of fiefdoms, but it doesn’t seem to have a dominant fiefdom.

      -Everyone already feels fully qualified to challenge the big shots in statistics in a way you never see in fields like say Physics.

  8. Nony says:

    I don’t think there’s anything wrong with a little pride with the byline either.

    Within 10, I think the strictures of writing a real paper drive more readable arguments, better organization, clarity, and accuracy. I routinely see half-baked analyses and Arxiv papers on the net which would be improved simply by meeting basic standards of clear writing. People are busy and don’t have time to wander through half-done write-ups.

    • Nony says:

      Within 2, there are also the abstracting services (or am I dating myself?).

    • Anonymous says:

      True, but there is a difference between editorial services to clarify an argument and boneheaded comments about the science. For example: “I don’t like your method, re-write the paper and use mine”; “This contradicts my findings, so I am going to shoot it down somehow”; and the catch-all “This experiment lacks external validity,” to name a few.

      I’d rather pay for an English editor.

      • Nony says:

        My science was pretty data point science, but I was honest and simple about it. And I had no issues. Also, I realize there are times when studies need to be written up because a researcher is moving on, even though they are not perfectly performed. That’s quite fine–we are better off having the information archived and summarized even if the study was imperfectly designed or performed. But just be honest, describe any flaws, don’t claim you invented something (cut the Nature nanohype BS). The reader appreciates the difficulties and would rather have honesty. Evasion or ambiguity or dancing around is just a pain in the ass for everyone.

        And all my first author papers got published without revision, in good specialty journals. This after my advisor (himself an editor) said such candor would never work. (He’d never had or seen a paper published with no changes.)

        Just do decent work, publish the data with all the hair on it, draw a few graphs and have moderate discussion of inferences. Rinse, lather, repeat.

        • Andrew says:


          Your advice seems reasonable, but if you want to get published in Science, Nature, Psychological Science, etc., then I’m guessing it’s a better strategy to hide any complications under the rug and make sure to produce a lot of p-values that are less than 0.05.

          I really wish things worked like that in fields I’ve been involved with. I was lobbying for just that in my own field (computational linguistics). But that’s not how things go. I was on an editorial board for two years that I don’t think accepted a single paper on first submission. I was constantly arguing that making a hard accept/reject decision based on the submission was better for everyone involved, including the field. Yet my co-editors were inclined to try to make every paper better, and honestly, what paper can’t be made better with more work? So it was never-ending, soul-draining rounds of revise-and-resubmit (and I say that as a reviewer, not an author).

          One benefit of computer science being more conference oriented is that the accept/reject decisions are more clear-cut and faster to come about. Even so, the big comp sci conferences have moved to rounds of author rebuttals of reviews and then a round of reevaluation. It’s like a never-ending committee meeting. After one conference’s worth of reviewing with extra rounds of feedback, I’d had enough (as a reviewer, not as an author). And I truly enjoy reading papers and giving people feedback, and I truly enjoy getting feedback. I think overall the bar to publication is set too high (particularly the two-out-of-three-reviewers-like-it criterion, or an average of three reviewers placing it in the top 20% — that just leads to conservative decisions).

          So I imagine policy varies a lot by field. In history, I hear you need to have a book published, so it’s the editors at publishing houses making the decisions! Speaking of fields, what’s “pretty data point science”? Was “pretty” just a qualifier, as in “pretty good”? Even so, what’s a “data point science”?

          • Fernando says:

            Bob: “particularly the two-out-of-three-reviewers-like-it criterion, or an average of three reviewers placing it in the top 20% — that just leads to conservative decisions”

            What passes for conservative can actually be pretty risky.

            A quick analogy is to pensions. Many people I know are risk averse and put their savings in the bank. The upshot is that they are at great risk of having insufficient funds upon retirement.

            The principle is more general. When I worked in a large financial institution I saw how individually conservative decisions ended up exposing the institution to a lot of risk.

            I think something like this also has a role in publication bias.

  9. Jordan Anaya says:

    Wikipedia apparently still needs a published reference. I noticed someone removed a link to my GRIMMER preprint because of that. Luckily, the preprint status of my discovery didn’t deter this person from extending my work: