A new approach to scientific publishing and reviewing based on distributed research contracts?

Darren Zhu writes:

I’ve been working on a project to improve incentives in science funding and publishing, inspired by peer-to-peer cryptoeconomic models. The high-level idea is to use “smart” research contracts mediated by peer-to-peer review networks to replace today’s opaque, centralized peer review.

Would love to hear your thoughts – I was reminded of your ire for peer review and your proposals for journals to focus on curation rather than publication.

While I’m still working out the mechanisms to rank and reward decentralized reviewers, I’ve started sharing some of my ideas here, with summaries here and here.

I have no idea what to think about all these details, but speaking generally it’s good to see this sort of initiative being tried. The plan at the above link is pretty detailed, with discussion of “atoms,” “contracts,” and other constructs that are part of the larger design, and it reminded me a bit of online collaboration systems such as Github and Discourse. This in turn got me thinking about Stan.

Zhu’s system appears to be designed as a replacement for scientific journals. Journals serve several functions. Most obviously, publication: a way to get research from scientists to readers, many of whom will themselves be scientists who can use the research to advance their own ideas. But journals also provide endorsement (a paper in a top journal will get more attention and often, like it or not, more trust), and they serve the systems of academic hiring and promotion.

But when I saw some of the details of Zhu’s plan, in particular the ideas relating to funding and validation of research, I thought about Stan and other collaborative software projects, where there are lots of ideas floating around, and our ways of funding and evaluating these ideas are kind of awkward. Maybe some sort of distributed reviewing and reward system would be a way to go? I have no idea, and I’m pretty sure that lots of software engineers have thought a lot more about this than I have.

6 thoughts on “A new approach to scientific publishing and reviewing based on distributed research contracts?”

  1. To Darren Zhu:

    Admittedly I only gave this a few minutes’ attention, but I find it confusing, and I suggest (i) splitting the discussions of funding and publishing, and (ii) giving clear examples.

    I also have a very hard time seeing how you envision the transition between the current system and your proposed system happening. (This is the fatal flaw of most proposals to fix science.)

    Nonetheless, I think it’s great that you and others are thinking about new approaches!

    Expanding on my confusion: Consider my group’s last three papers (https://pages.uoregon.edu/raghu/papers.html). In your proposed scheme, what would have happened to these? From what I can tell, the idea is that potential readers are “specifying that her funds can be claimed by any one of the Authors (with their identity verified) who uploads a copyright-free draft of the requested paper to IPFS. Other readers interested in that same paper may join as funders who contribute to this PUP. At some accumulated fund value, one of the authors ought to have sufficient incentive to upload the paper.” But this assumes that my potential readers are mostly in the present; that there’s no value to someone 10 years from now, who of course can’t send me funds. Even worse, the people “paying” for my paper have to promise funds for a paper they haven’t seen, based presumably on an abstract or advertisement — this seems even more prone to hype than the current system.
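    The quoted “pay-up publication” (PUP) mechanism can be sketched as a simple pledge-and-claim escrow. This is a hypothetical illustration only: the class, method names, and rules below are my assumptions, not part of Zhu’s actual design.

    ```python
    # Hypothetical sketch of the PUP escrow described in the quote above:
    # readers pledge funds that any one verified author can claim by
    # posting a copyright-free draft. All names here are illustrative.

    class PaperPledgePool:
        def __init__(self, verified_authors, threshold):
            self.verified_authors = set(verified_authors)  # identities assumed pre-verified
            self.threshold = threshold  # fund level at which an author is expected to act
            self.pledges = {}           # funder -> amount promised, sight unseen
            self.claimed_by = None

        def pledge(self, funder, amount):
            # Funders commit before seeing the paper -- the hype risk noted above.
            self.pledges[funder] = self.pledges.get(funder, 0) + amount

        def total(self):
            return sum(self.pledges.values())

        def claim(self, author, draft_cid):
            # Any one verified author may claim the whole pool by posting a
            # draft (draft_cid stands in for an IPFS content identifier).
            if self.claimed_by is not None:
                raise ValueError("already claimed")
            if author not in self.verified_authors:
                raise ValueError("claimant is not a verified author")
            payout = self.total()
            self.claimed_by = (author, draft_cid)
            return payout


    pool = PaperPledgePool(verified_authors={"alice", "bob"}, threshold=100)
    pool.pledge("reader1", 60)
    pool.pledge("reader2", 50)
    payout = pool.claim("alice", "ExampleDraftCID")
    print(payout)  # 110
    ```

    Note how the objections above map directly onto this sketch: the pool closes once claimed, so a reader ten years later has no way to contribute, and every pledge is made before the draft exists.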

  2. Have to say I find this idea and its description rather incomprehensible, although I tried to read some of the info provided in the links in the top post. Incidentally, the idea that one might summarize a quite complex plan for reorganizing the entire scientific grant and publication system within a series of Twitter posts seems to me an admission that what is being presented shouldn’t be taken very seriously!

    I’m not convinced that the problem to be addressed in this make-over is very well defined. Quite a lot is made of the amusing observation that some ground-breaking/Nobel prize winning research was initially rejected from some journals. But so what? It got published anyway and had its impact. That seems to me to be a positive characteristic of the publishing system – that published papers ultimately get the recognition and make the impact they deserve.

    There is a corollary to this – that dismal papers likewise get what they deserve – i.e. not much in the way of impact. That relates to point #1 in my list of problems with the way that science is done nowadays, which are:

    1. There are way too many people “doing science”. A massive industry of dismal science and dismal papers in second-rate journals has developed in the last 20 years. To a considerable extent this isn’t a major problem, since a vast rump (perhaps 80% or maybe more) of published stuff can easily be ignored. However, to my mind it cheapens the ideal of science as a robust means of acquiring knowledge, makes the enterprise appear cumbersome and inefficient, and has the negative effect that bullshit/ho-hum “science” can be promoted, e.g. for public titillation or for political purposes. As a reviewer one simply declines to accept review assignments for dismal journals, but the dismal work appears nonetheless.

    2. Zombie science. This relates to #1. A negative aspect of the current systems is that a grant application in subject X is sent to experts in subject X to review (after all, these experts are ideally suited to assess the value of the application). However, what if field X is moribund and going nowhere? The reviewers who are wedded to the subject think the application is just fine. I have the feeling that some dead-end topics can retain funding for years as a result of a quite benign complicity among a wide group of researchers who assess each other’s papers and applications and think everything is just fine! One way to deal with this issue is to bring reviewers from outside the field into the review process, including the views of non-scientists who might be able to recognise that a field isn’t progressing (difficult IMO). A positive practice is having reviewers of applications who are completely outside the system (e.g. as a Brit I assess an Italian or US grant application, since (a) I’m not directly competing with the applicant for funding and (b) I might be more comfortable highlighting fundamental deficiencies in the scientific subject).

    3. I’ll stop there since this post is way too long … could add a few more later on

  3. As with pretty much all “do it with Blockchain” type proposals, this is a solution in search of a problem, and it doesn’t actually solve the problems it does face. Like, what is the actual benefit here? Something something “smart contracts”? Why’s that better than having someone administer stuff, someone who might actually know what the issues are and what rules and guidelines might work?

    Is the benefit supposed to be decentralisation? Is the issue with academic publications really excess centralisation? As Andrew said in the linked-to post:

    > To jump to the punch line: the problem with peer review is with the peers.
    > In short, if an entire group of peers has a misconception, peer review can simply perpetuate error.

    Doesn’t a decentralised system where peers are *all* you have simply double down on this problem? Ever tried shouting into the Reddit void on a woefully inaccurate scientific posting? Ever looked at the Wikipedia moderation wars? Ever seen what the top rated comments on crappy covid preprints are like?

    How will a bad article ever get retracted in this system? Peer review as it is has its faults, but this proposal is worse than the disease.

    • *How will a bad article ever get retracted in this system?*

      There is never a reason for a paper to be retracted. Sweeping mistakes under the rug can only stop people from learning from it.

      • Regardless of that characterisation (which I disagree with), the point is that a distributed economic process will fail to give the information that a study is completely wrong the same precedence as the original study itself.
