I came across Cureus while debunking COVID-19 articles, as a journal (affiliated with the Springer Nature Group) with an “interesting” model. So far, my impression has been that the expedited reviewing it offers is its main weakness, and all of the papers I was sent that were published there were of rather low quality. Given how long publishing a manuscript can take, faster reviewing seems like an advantage, but I have argued in the past that, if anything, expedited reviewing is a problem, particularly when the reviewers’ reports are not made available.
Anyway, I suppose it is not too much of a problem to provide faster reviewing for case reports, which Cureus seems to publish a lot of. But it is more problematic for papers that are not case reports. For instance, a re-analysis of two Ivermectin studies published in Cureus highlights that: “Untreated statistical artefacts and methodological errors alone lead to dramatic apparent risk reduction associated with Ivermectin use in both studies.” More problematic papers on the “miracle drug” Ivermectin have been found in Cureus, and they all somehow have quite damning reports on PubPeer (see e.g. this thread, or this one).
But what I found interesting, and was brought to my attention only today by Thomas Kesteman, is the use of the Scholarly Impact Quotient (SIQ), which Cureus describes as follows: “SIQ™ is designed to supplement our pre-publication peer review process and allows authors to receive immediate, valuable reader feedback without the politics inherent in most other metrics determining so-called “importance.” We do not consider impact factor to be a reliable or useful metric of individual article importance. Cureus is a signer of DORA – the San Francisco Declaration on Research Assessment – and does not promote the use of journal impact factors.”
I find that quite interesting, and I am usually in favour of journals/initiatives trying something different. But I am quite afraid that the metric can be easily gamed. For instance, the bad Ivermectin studies mentioned above all seem to have good scores. In any case, I am not condemning an initiative such as this one at all; I am just wondering what it will provide and how it will be gamed.
First, I thought Cureus must be spam, because I get so many emails from them… I didn’t even bother to read them.
Second, I’m sorry to say that I have found peer review from journals to be largely unhelpful (and sometimes just flat out incorrect/ignorant).
Peer-reviewed journals – it’s a crazy system. The researcher does all the work to come up with the study, obtain the funding, undertake the study, analyze the data, and write it up, and then they actually pay someone $4,000 or whatever to take all of their hard work, find some free reviewers, do some minor formatting, and publish it. What an amazing business model, where the workers pay for the product that they conceived of and made. Why would anyone agree to such a lopsided system? Apparently because there must be a lot of value in that published paper. Papers seem to be the ‘currency’ by which more grants and positions are ‘purchased’.

This all seems like a system with perverse incentives. On the one hand, there would seem to be an incentive for journals to publish as many papers as possible, thereby increasing revenue. I think this can be seen with the ‘lower tier’ and ‘predatory’ journals. On the other hand, for household-name journals, there is the incentive to reject as many papers as possible and filter in only the most ‘exciting’, thereby increasing the value of the journal’s currency. Sort of like controlling inflation. They never lack submissions, and can keep the demand high.

For researchers, the unfortunate goal (due to this system) becomes obtaining some of this currency by getting papers out, preferably in the journals with high currency, so that things like grants and positions can be purchased. And the cycle repeats. All of this seems kind of messed up. And all of the many great, hard-working, ethical researchers out there doing good work with a real interest in their topic still have to be part of this mess. Is this not how it seems, or am I way too cynical?
Just to clarify, I agree with most of what you said. I have actually argued in some (many?) of my own methodological or meta-research papers that the current system and incentives are not good. So I mostly agree with what you said about journals and the peer-review system (not to mention APCs and the slow correction of science; see one of my published perspectives on the latter: https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3001572).
I mostly just wanted to share my curiosity about this SIQ system, which I am sure will be gamed somehow, even more than other metrics are.
Yes, the link to the SIQ system that you shared says, “SIQ harnesses the innate “wisdom of the crowd””. It’s hard to think of anything less wise than a crowd. Anyhow, it appears to be a slider-scale rating where registered users score a variety of components like “clinical importance”, “data analysis”, etc. So I guess the idea is that, with enough ratings, the paper can be rated for quality instead of by number of citations? It’s at least a nice thought… but I don’t see how it will accomplish the purported goal. Also, how is it “grounded in statistical power”? What does that mean?
“So I guess the idea is that, with enough ratings, the paper can be rated for quality instead of by number of citations? It’s at least a nice thought…” That’s also what I tend to think, but I remain skeptical of the use and possible misuse of the system.
“grounded in statistical power” -> I guess they meant the wisdom-of-the-crowd idea here.
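If that is what they mean, here is a minimal, purely hypothetical sketch of the idea (Cureus does not, as far as I can tell, publish its aggregation formula, so the simple mean of slider ratings below is just an assumption for illustration). It shows why a crowd score gets more stable as the number of ratings grows, and also why a handful of coordinated raters can still move it when there are only a few ratings, which is my gaming worry from above:

```python
import random
import statistics

# Hypothetical aggregation: the mean of 1-10 slider ratings.
# (An assumption for illustration, not Cureus's actual SIQ formula.)
def crowd_score(ratings):
    return statistics.mean(ratings)

# An honest rater: a noisy judgement centred on the paper's "true" quality.
def honest_rating(true_quality=5.0, noise=2.0):
    return min(10.0, max(1.0, random.gauss(true_quality, noise)))

random.seed(1)

# More raters -> less variable score (the "wisdom of the crowd" / statistical-power idea).
for n in (5, 50, 500):
    scores = [crowd_score([honest_rating() for _ in range(n)]) for _ in range(200)]
    print(f"n={n:3d}  mean={statistics.mean(scores):.2f}  sd={statistics.pstdev(scores):.2f}")

# Gaming: five coordinated raters all give 10/10 to a weak paper with few honest ratings.
honest = [honest_rating(true_quality=3.0) for _ in range(10)]
print("honest only:           ", round(crowd_score(honest), 2))
print("with 5 coordinated 10s:", round(crowd_score(honest + [10.0] * 5), 2))
```

The spread of the score across repeated “crowds” shrinks roughly as 1/√n, which is presumably what the “statistical power” claim alludes to; but most articles only collect a handful of ratings, and in that regime a few coordinated 10s dominate the score.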
I guess it depends on how you ask the crowd. If everyone gets a vote, then you have what the great German poet Schiller called the government of the stupid. (He was not particularly fond of democracy, as you might have guessed).
But if it is the best arguments from the crowd that count, like the audience lifeline in a quiz show (the contestant picks a volunteer from the audience who claims to know the answer), then I believe the bigger the crowd, the better!
In short, I think it is the mechanism by which the crowd forms a common will that determines whether the crowd is smart or not.
jd, what’s amazing to me is how many people just accept it as good and normal. Like, hell no, I’m not going to participate in that mess, but then there are people who are excited to do so and think it’s a great industry.
I guess I am not experienced enough to make very general comments about motivation. The researchers I have worked with have a desire to do good research in their field, but in order to do so, one is forced to play the game even if one is not happy about it, so there is certainly motivation to get papers out. If there are no grants, then eventually I don’t get paid either! I’m sure you’re right, and many people accept it as good and normal…
All I can say is that when I think about how this system appears to work, it seems there are inherently conflicts of interest for all parties involved.
Yeah, the “SIQ” is a profoundly underwhelming metric.
There is an abundance of nonsense in scientific publishing. There are two peer-review models I like. One (like Frontiers) publishes the reviewers’ names with the paper – that seems like a good way to ensure that reviewers take their roles seriously. The other (like eLife) publishes the reviews and the authors’ responses to the reviews. Again, that ensures that participants take their roles seriously. If you are reviewing for a good journal, then you’re likely to be reviewing fairly decent papers, and that can be rewarding – I put lots of effort into reviewing what I think are decent papers, but I only accept five or six reviewing jobs a year (even though nowadays I get asked to review maybe 30-40 times a year – there is so much junk out there).
Otherwise, this conceit of a supposed mark of quality (the “SIQ”) is pretty rubbish and easily gamed, as is pointed out in the top article. Who is this apparent stamp of quality actually for? A published paper should garner the interest and respect it deserves from the community of scientists in the field. This is still the way things work in the vast majority of scientific endeavours, IMO – it’s only in those rather politicised arenas where short-term judgements seem to be required (e.g. COVID treatments; climate science) that it would be useful for the non-expert to know whether a paper is any good or not, and it’s not obvious that attempting to short-circuit the standard methods of pre- and post-publication peer review and consideration by knowledgeable actors, by contriving some kind of metric, is going to help.
Traditional peer review, supplemented by PubPeer-type assessment as well as posting preprints to ensure rapid dissemination of work of immediate importance, actually still seems to work OK, despite the flaws that are often talked about! IMO it works to the extent that everyone (scientists, editors, reviewers) is acting in good faith. I suspect the “SIQ” is likely to encourage bad-faith actions.
I also love eLife’s model. Frontiers’ approach is emulated by many others, such as BMJ Open, BMC, etc…
Re: “IMO it works to the extent that everyone (scientists, editors, reviewers) is acting in good faith.” I agree; the problem, IMO, is that all the incentives push in the opposite direction.
Great comment. One thing that slightly bugs me about post-publication peer review is how little exposure some of it gets. For example, most people won’t be aware of PubPeer comments on a paper unless they have already bought into the system (e.g. installed the browser add-on). It seems surprising to me that newer companies like scite.ai have already teamed up with publishers to add citation-type info to publication webpages, but PubPeer seems to be left out in the cold. Is it because PP wishes to remain totally financially independent of/uninfluenced by publishers? It just seems a shame that PP content often ends up preaching to the choir.
One thought I’ve had here (and I’m sure others have too) is that PP content will eventually get sucked up by AI large language models, and so perhaps AI tools will start to give potentially quite interesting feedback on papers.
I did not know about this browser extension/these browser extensions! Thanks! I’m sure it will come in handy.