My proposal for JASA: “Journal” = review reports + editors’ recommendations + links to the original paper and updates + post-publication comments

[cat picture]

Whenever I’ve been asked to edit a statistics journal, I’ve said no thank you, because I think I can make more of a contribution through this blog. I’ve said no enough times that they’ve stopped asking. But I’ve had an idea for a while, and now I want to do it.

I think that journals should get out of the publication business and recognize that their goal is curation. My preferred model is that everything gets published on some sort of super-Arxiv, and then the role of an organization such as the Journal of the American Statistical Association is to pick papers to review and to recommend. The “journal” is then the review reports plus the editors’ recommendations plus links to the original paper and any updates plus post-publication comments.

If JASA is interested in going this route, I’m in.

30 thoughts on “My proposal for JASA: “Journal” = review reports + editors’ recommendations + links to the original paper and updates + post-publication comments”

  1. I came up with a system called “metrica” that I still fantasize about someone more influential than me championing into reality:

    – Not-for-profit, open-access, and cross-discipline (serving the entire scientific/academic community: anyone seeking to communicate primary empirical findings and/or contributions to theory).

    – To submit a report, all authors must become registered users (involves simply entering/verifying an email address with which a persistent account will be associated).

    – Reports are submitted and pass through three phases of existence: private review, public review, and published.

    – When in private review, reports are only visible to the authors of the report and those whom the authors explicitly invite to review (this is meant to emulate what authors typically do now informally: sending around a manuscript to their colleagues for comment before submission to a journal).

    – When in public review and when published, reports are visible to everyone (registered and unregistered users).

    – Registered users can contribute “critical content” (either long-form reviews or short-form in-line comments) for reports that they can access (i.e., all reports that are published, all reports that are in public review, and all reports under private review that they have been invited to view). Additionally, short-form in-line comments can be made on long-form reviews (e.g., user A comments on user B’s review of report X).

    – When critical content is created, the author of this content chooses: (1) whether to be named or quasi-anonymous (the system still tracks that they are the author of the content, but other users are unable to discern the author’s identity); (2) whether to permit their content to be made public.

    – For critical content, the independence of the critical content creator with regard to the report’s authors is quantified by co-authorship network analysis of the critical content creator’s prior work and the prior work of the report authors. This independence measure is visible with the critical content. (One could leverage Mendeley’s API for this, though I haven’t written the network analysis code yet; a rough sketch of one possible computation follows this list.)

    – For critical content, the expertise of the critical content creator with regard to the report’s content is quantified by semantic analysis comparing the critical content creator’s prior work (published reports on metrica and elsewhere) to the report on which they are commenting. This expertise measure is visible with the critical content. (Also sketched after this list.)

    – Report authors choose whether to let critical content be private or, if permitted by the critical content creator, public. When critical content is permitted to be public by all parties, it is made public immediately upon creation. (Ideally, most authors would permit critical content to be public, but there might be some scenarios I have failed to imagine under which it would be important to give the report authors the choice to keep the critical content private even when the critical content creators are OK with it being public. If this choice is indeed made available, it should probably be made at the time of submission and apply to all critical content, rather than permitting the authors to “censor” particular critical content.)

    – Registered users can provide quality ratings for any content they can view (published reports, reports under public review, public critical content). I’m thinking a 5-point rating scale (“exceptionally poor”, “below average”, “average”, “above average”, “exceptionally good”), with multiple domains of rating available for reports (I’ve thought of 4 so far: “overall quality”, “theoretical contribution”, “methodology”, and “data analysis”)

    – Statistics associated with the volume and ratings of a user’s contributed content are aggregated and publicly visible, providing metrics of scientific contributions. This includes content that is private and/or quasi-anonymous.

    – Registered users can tag reports as replications, and reports reaching some criterion number of replication tags contribute to each author’s “replication” statistics (i.e., separate stats on the volume and quality of replication attempts will be associated with each author, emphasizing the importance of this oft-ignored/suppressed endeavour in science).

    – A given user’s ratings of reports and critical content might reasonably be weighted by the above-noted expertise and independence measures in determining the influence of these ratings on paper/author stats (one possible weighting is included in the last sketch after this list).

    – Registered users can explicitly “invite” other users to review a given report. If the invited reviewer eventually creates a review that is subsequently rated as a good quality review (via some critical number of ratings), the inviting user gets a bump to their “curator” reputation.

    – Authors are given the ability to make a report “published” once the report achieves a minimum number of ratings, a minimum mean rated overall quality, and a minimum number of reviews that in turn receive a minimum number of ratings and a minimum mean rated overall quality. (A version of this check appears in the last sketch after this list.)

    – During the private review and public review phases, report authors may make changes to the report.

    – Once in “published” state, the report cannot be changed.

    – During all phases, a history of changes to a report is viewable by anyone with permission to view the report. Users may subscribe to an RSS feed (or notifications via email or some messaging system internal to metrica) of changes to a given document.
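
    A minimal sketch (in Python) of how that independence measure might be computed. The prior_coauthors lookup is a hypothetical stand-in for whatever bibliographic source would actually provide co-author lists (Mendeley’s API or otherwise), and the scoring rule itself is just one guess at a reasonable definition:

      def independence(reviewer, report_authors, prior_coauthors):
          # prior_coauthors(user) -> set of people that user has previously
          # published with (hypothetical lookup; to be backed by a real
          # bibliographic source).
          reviewer_circle = prior_coauthors(reviewer) | {reviewer}
          tied = 0
          for author in report_authors:
              author_circle = prior_coauthors(author) | {author}
              if reviewer_circle & author_circle:  # shared collaborator or direct tie
                  tied += 1
          # 1.0 = no co-authorship ties to any report author; 0.0 = tied to all of them
          return 1.0 - tied / len(report_authors)

      # Toy example: the reviewer has co-authored with author a1 but not a2.
      coauthors = {"reviewer": {"a1"}, "a1": {"reviewer"}, "a2": set()}
      print(independence("reviewer", ["a1", "a2"],
                         lambda u: coauthors.get(u, set())))  # prints 0.5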
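
    A similarly rough sketch of the expertise measure, using plain bag-of-words cosine similarity between a commenter’s prior abstracts and the report text; a real system would presumably use something stronger (TF-IDF, embeddings), but the shape of the computation would be the same:

      import math
      import re
      from collections import Counter

      def word_counts(text):
          return Counter(re.findall(r"[a-z]+", text.lower()))

      def cosine(u, v):
          dot = sum(u[w] * v[w] for w in set(u) & set(v))
          norm_u = math.sqrt(sum(c * c for c in u.values()))
          norm_v = math.sqrt(sum(c * c for c in v.values()))
          return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

      def expertise(prior_abstracts, report_text):
          # Best match between any of the commenter's prior abstracts and the report.
          report_vec = word_counts(report_text)
          return max((cosine(word_counts(a), report_vec) for a in prior_abstracts),
                     default=0.0)

      # Toy example
      print(expertise(["hierarchical models for small-area estimation"],
                      "a hierarchical model for estimating small-area mortality"))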
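
    Finally, a sketch of how ratings might be weighted by those two measures, and of the check for flipping a report to “published.” The multiplicative weighting rule and the particular thresholds are illustrative guesses, not anything I’m committed to:

      def weighted_mean(ratings):
          # ratings: list of (score_1_to_5, expertise, independence) tuples.
          weights = [e * i for _, e, i in ratings]
          total = sum(weights)
          if total == 0:
              return None
          return sum(s * w for (s, _, _), w in zip(ratings, weights)) / total

      def can_publish(report_ratings, review_summaries,
                      min_ratings=10, min_quality=3.5,
                      min_reviews=2, min_review_ratings=5, min_review_quality=3.5):
          # report_ratings: (score, expertise, independence) per rating of the report.
          # review_summaries: (number_of_ratings, mean_quality) per long-form review.
          overall = weighted_mean(report_ratings)
          good_reviews = sum(1 for n, q in review_summaries
                             if n >= min_review_ratings and q >= min_review_quality)
          return (len(report_ratings) >= min_ratings
                  and overall is not None and overall >= min_quality
                  and good_reviews >= min_reviews)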

    Benefits of the metrica system:

    – efficient & rapid progression through peer review, by eliminating the need to submit serially to multiple journals until a report is accepted

    – more nuanced and explicit metrics of scientific contributions, including the quality of reports (not just their number) and contributions that have thus far been ignored, like reviewing and curating

    – possible to highlight trends in hot topics via timelines of readership of reports, referrals of reports (one user suggests that another user read a particular report), and ratings of reports

    – reading recommendations for a given user via trends and semantic analysis of that user’s own work (possibly also via manually entered keywords, to capture interests outside the user’s published work; this could alternatively be achieved by simply using their reading history).

    Issues (I’ve thought of so far):

    – While the reputation metrics would become an end unto themselves if this system were dominant, at the outset one would have to find some way to incentivize folks to provide quality content. A big PR campaign would be necessary, the funding of which might reasonably be solicited from research funding bodies, possibly including the paid solicitation of contributions from “big names” in science (though this latter option seems somewhat distasteful and anathema to the non-profit/non-financial spirit of metrica). It would also likely be very helpful to solicit the endorsement of universities and departments, whose acceptance of the metrica metrics on par with traditional contribution metrics (impact factors, number of publications, etc.) will be critical to assuring potential contributors that they aren’t wasting their time.

    • Hi Mike

      This is very close to ideas I was ruminating on but haven’t got round to assembling with the welcome clarity you have.

      I think it would be a great basis for a commons-based academic publication system. I know plenty of people in the Commons Transition and P2P movements that this might appeal to — do get in touch if you’d like to explore this. I’m “asimong” everywhere (including gmail.com), easily findable.

    • Feels like this borrows a lot of general concepts from StackOverflow (which I think is a good idea; obviously you can’t model it on StackOverflow one-to-one). My only comment would be that systems like this will fail if there is too much friction for users to participate. You’ll end up with a trade-off between wanting to make it easy/frictionless for folks to participate vs. putting in lots of steps/controls in an attempt to preemptively ensure quality, etc.

  2. It’s interesting to ask why this type of unbundling hasn’t happened yet. Right now, journals solicit, examine, improve, and disseminate manuscripts, and (in doing so) attest to their quality, and occasionally provide commentary through published discussions. Any organization could carve out attestation and do only that. But other than the occasional best-paper award, this almost never happens.

    Having been on a number of award committees, I suspect that we just aren’t very good at attestation when it stands alone. When we bundle all of these tasks together, we can focus on examining studies and providing suggestions for improvement, which I think we do fairly well. We end up with some attestation in deciding to publish, but I suspect we do it a lot better when we restrict our pool of papers to those that have been submitted, examined, and improved, and when we are further constrained by the requirement that we not publish too many or too few papers.

    Also, would you really take a job that consists of nothing other than saying “this paper gets a star, and that one doesn’t”? Would it be rewarding? How would you respond to an email saying “my article should get a star” when you haven’t even read it? Would you feel obligated to read every paper arguably in the journal’s domain? What if someone complained that some papers are classed with acceptable but inferior work? Would you start giving some papers 4 stars and others just 3?

    Are you one of the rare faculty who says “grading is the most rewarding part of my job”? Not for me. I just changed my grading scheme for short written assignments to be “full credit or no credit”. By not distinguishing between an 87 and a 92, I save a lot of headaches and time, which I can devote instead to providing substantive comments. This proposal sounds like moving journals the other way, so that they are ALL about whether a paper is an 87 or 92.

    • Rjb:

      Interesting points. Let me address them in order:

      – You ask, “How would you respond to an email saying ‘my article should get a star’ when you haven’t even read it? Would you feel obligated to read every paper arguably in the journal’s domain?” One approach would be to require that authors submit their papers to the journal; we could even keep the current system in which you can only submit to a single journal (this could be verified using some open registry of submissions). So, as journal editor, I only have to evaluate the papers that are submitted.

      – You ask about grades of 87 or 92. Again, one could start with something like the current system, in which the editors are allowed to accept some (approximately) fixed number of submissions each month. As with the current system, the limiting factor is not the supply of “stars” but rather the supply of reviewers and the editors’ time.

      – You say my proposal is all about grading. I didn’t intend to imply this! As I wrote, I’d like the journal to be review reports + editors’ recommendations + links to the original paper and updates + post-publication comments. The review reports and the updated paper are still important steps in this procedure, just as in the current system! I guess I made a mistake by labeling the goal as mere “curation” as this does not include all the active steps of getting review reports and suggesting improvements to the papers.

      – I agree with you that we don’t want to be turning journals into award committees, as these tend to be so political.

      • I think the trick is to get rid of the “submissions” step. So, no emails asking for a star! Everyone just does their research and publishes it on the super-Arxiv. The journals would then actually have to do something that looks a bit more like science journalism (though much more serious) than like academic publishing. Scientists would write to each other to draw attention to their results, but only replicated work would make a real splash. Obviously, the journals would still have power. But their reputations would depend on drawing attention to work that actually contributes.

        Certainly, corruption is possible in this system, but, if caught, it would be disastrous for the journal. You’d want journals to keep a critical eye on each other, publishing direct criticism of the work published in other journals.

      • I think a better way of describing the benefit of journals, rather than “curation,” may be to say that journals provide several services: review, feedback, citation, and dissemination/publication. All of these steps are important, but the final step no longer needs to be provided by a publisher or journal, and, indeed, continuing to couple dissemination with curation and the rest appears to be highly problematic.

        • Brenton:

          Yes, I agree. Also the current system is horribly wasteful because (a) valuable review reports are kept secret, and (b) most of the reviewing effort is spent on the worst papers.

  3. Some of what you propose has been done by the European Geosciences Union with multiple journals. The one I read most is Atmospheric Chemistry and Physics (ACP, http://www.atmos-chem-phys-discuss.net/discussion_papers.html). As I understand it, a manuscript is rapidly (~ days) evaluated by the reviewers (selected by an editor) to see if it is topically relevant and has some meat. Then the manuscript is posted open access on ACP Discussions. Those reviewers have 2 months to post reviews. Any registered user can post a comment (not anonymously). Reviews, comments, and author responses to both are also posted. If the paper is ultimately accepted (usually in revised form), then it is published in ACP. The discussion is closed once the two-month formal review period is over, but it is archived.
    The biggest issue I see is that most papers get zero comments other than those of the reviewers selected by the editors. It would also be good if the discussion could continue after the two-month period.

    • What a great example! And, all along, I thought I had invented this idea. Now I find out it has already been done. No system is perfect, but this is as close as I can think of. The questions I have are: how did they do this, and what prevents it from being implemented in other fields? I’ve always thought it requires an esteemed editorial staff willing to do the work and also willing to abandon a system which they have managed to climb to the top of. Both are tall tasks. But I like most aspects of this model and wish it were available in other fields.

      Regarding your last caveat, I wonder why most papers get zero comments. Either the journal is not well regarded or academics are not willing to recognize comments as contributions. I suspect the latter is a major cause of the problem. People list journals they referee for on their resumes, and these are somewhat “counted” for promotion, tenure, evaluation, etc. But I suspect they are really just “counted” – i.e., if you referee for 5 journals that is “better” than refereeing for 4. Why can’t we evaluate the worth of those referee reports/comments? I’ve seen referee reports that only reflect the ignorance or bias of the referee – and I’ve seen reports that were intellectual contributions in their own right. If we are willing to evaluate the comments with a critical eye, then I think more papers would receive more comments. And the resulting works might be better as well.

      • The official reviews can be anonymous, but the additional ones at EGU are named. I would expect that that is the main reason there are not many additional reviews. Plus, reviewing is a tragedy of the commons: it is important, but it’s nice when others do it, so you probably have to explicitly ask people to write a review and not just wait to see whether they will.

  4. Nowadays, the primary service that journals provide is that they recruit reviewers for the submitted manuscripts. This has the virtue that even authors from obscure universities can get at least a few knowledgeable people to read and evaluate their manuscripts. This virtue is perhaps counterbalanced by the gate-keeper aspect of journals: manuscripts that are not accepted are never seen by anyone besides the reviewers. But the gate-keeper aspect is rapidly dissolving, because of arXiv, SSRN, and similar platforms.

    I worry that in a system where the journals only review a subset of articles, this reviewer recruitment service will be lost and journals will become even more a force for elitism and “bubble maintenance”.

    I’d like to see any future system provide a reviewer guarantee that every serious manuscript gets reviewed. The challenge is to create incentives for reviewers. Perhaps a reputation system in which high-quality reviews help advance your career would do the trick?

    • Yeah, it’s tough. Right now, reviewing well gives you essentially nothing tangible (beyond, say, discounts on textbooks from the publisher), and the reputational benefits that do exist are pretty limited because of anonymity: nobody except the editor even knows you contributed at all, let alone how useful the review was.

  5. I like this idea and Mike’s additional ones. My question is whether this would mean the end of all traditional journals (so Super-Journals [plural presumably] sit on top of Super-Arxiv[s]) or whether other journals still exist, so Super-Journals just pick papers from these (and Super-ArXiv). If other journals still exist, then presumably people still send to the journals and archive on ArXiv, and then just hope for the extra endorsement from Super-Journals. What do these proposals mean for traditional journals? It doesn’t seem that much of a problem to have the traditional journals still exist, does it?

  6. What do you think about NAJ Economics? It seems somewhat similar to what you propose, except right now NAJ Ec is a bit bare-bones. But it’s a good start.

  7. Since so many people are suggesting something similar, maybe this idea’s time is really close.

    But for this to work, I guess we have to come together, as one thing that definitely won’t work (at least as far as I can see) is having lots of little independent systems of this kind.

    How about we find somewhere to build a community dedicated to implementing this, at least for all academics who see themselves as part of an academic commons? Something that has a clear values base.

    Non-hierarchical organisation of this is certainly a challenge, but a challenge worth measuring up to.

    If anyone would like, as I said in a reply above, you’re welcome to find me as “asimong”, also my e-mail address at gmail.

  8. I like this. But how does the journal end up picking papers out of the super-arXiv? That step seems to be vulnerable to the same sorts of incentives and gaming that happen under the current model.

    • Jason:

      There would still be a journal editor exercising his or her professional responsibility, soliciting and reviewing referee reports, etc. I do not imagine that my proposal will solve all problems: if Susan Fiske is the editor and she wants to pick out papers on himmicanes, this will still happen. One difference, though, is that the original referee reports will be open (so that readers will be able to see right away how much scrutiny the original submission received) and there will also be the opportunity for post-publication reviews, which would appear right there along with the original paper.

      I guess there would be some need to monitor the post-publication reviews, to stop spam, trolls, and attempts to manipulate the system via floods of the equivalent of one-star and five-star Amazon reviews. The system wouldn’t run on its own. But the current journal system doesn’t run on its own either. The resources that currently go into typesetting, collecting page charges, etc., could instead be used to pay people to monitor the post-publication review threads.

      • >I guess there would be some need to monitor the post-publication reviews, to stop spam, trolls, and attempts to manipulate the system via floods of the equivalent of one-star and five-star Amazon reviews

        Amazon seems to have raised the commenting bar recently, perhaps to address this exact concern? I noticed that the default presentation now seems to be to show only “verified purchase” reviews.

        In the journal-comment context, why not simply restrict comments to logged-in users (Arxiv, ASA, etc.), and furthermore disallow anonymous comments by default?

      • >I guess there would be some need to monitor the post-publication reviews, to stop spam, trolls, and attempts to manipulate the system via floods of the equivalent of one-star and five-star Amazon reviews.

        We should be so lucky. A critical problem with post-publication review is the opposite: even when the apparatus is in place, people don’t use it.

        For example, the website of the American Economic Association allows people to comment on the articles that appear in most of its journals. The AEA has provided this service for years. But few comments have ever been posted. One of the previous commenters says much the same when he writes that “Some of what you propose has been done by the European Geosciences Union […] The biggest issue I see is that most papers get zero comments.”

        Post-publication review is a good idea. But even when the editors are on board and the website is working, getting people to post their thoughts about published articles may be difficult. Journals might take a step in the right direction by asking particular scholars to provide post-publication comments, much as they now ask particular scholars to provide pre-publication reviews.

        • John:

          This doesn’t bother me so much. If nobody even cares enough about a paper to comment on it, fine. But then once the paper gets publicity, people will hear about it and post their corrections. For example, I never would’ve heard about those papers by Case and Deaton on mortality trends, except that they were in the news, so I looked at them and found some errors. I think it’s actually efficient that people don’t bother reading papers carefully until the papers get some attention.

  9. Andrew: Sure, the idea of centralizing the formal and informal conversations about work is great. I guess what I got stuck on is that the curation process can’t be editors looking at everything in the arXiv and picking. Instead, similar to the submission process, authors are presumably going to try to make editors aware of the paper in some way. And that fight for editorial attention will be fraught in many (all?) of the ways submissions currently are. Maybe worse in some ways, because it happens after the work is already out, so it could be gamed by good PR, etc. I don’t know, I haven’t fully thought it through, but I think it’s an interesting part of the model to explore.

  10. This is a great idea and something that will likely come with time. At Authorea, a platform for researchers to write and edit manuscripts online, we have a growing list of integrations with publishers and even preprint servers for direct submission. As this builds and more researchers write their papers online, we see the possibility arising that manuscripts can be winnowed to the proper journal community very easily.

    Also, this super-arXiv was basically my motivation behind the Winnower, a publishing platform I founded a few years ago and which is now part of Authorea.
