A recommender system for scientific papers

Jeff Leek writes:

We created a web app that lets people very quickly sort papers on two axes: how interesting it is and how plausible they think it is. We started with covid papers but have plans to expand it to other fields as well.

Seems like an interesting idea: a Yelp-style recommender system, but with two dimensions.

10 thoughts on “A recommender system for scientific papers”

  1. Would be interested to see stats on biased usage, which NIH stats may already suggest. Examples: Do professors from R1s get ranked as more ‘credible’ for similar papers compared to non-R1 professors? Do first/last authors who are women/BIPOC get ranked as ‘dubious’ more often?

  2. i cannot express how much love i have for the entire crew at JHU.

    solid group of individuals. they were the group that reinforced my belief that it’s best to keep your head down and contribute.

    only realised right now that dr leek isn’t technically part of the SMART STATS team, but honestly i can only thank Some Kind Of Superior Omniscient Entity for the great fortune of interacting with them early in my “scientific career”.

    i always thought Princeton-Plainsboro was supposed to be JHU, jmo ofc.

  3. Is this an April Fools’ joke that got delayed? You’re supposed to look at a paper’s abstract for 10 seconds before giving a knee-jerk reaction? And this will be helpful how?

    And I don’t understand why it’s called a web app when visiting the website gives me a single page with a link to download an iPhone app. I’m on my laptop with a perfectly good browser. I don’t want another app cluttering my phone. And reading even the abstract of a paper on a phone sounds painful. PDFs do not scale well for small screens!

  4. How could one possibly “very quickly” sort a paper based on “how accurate” it is?! (“How accurate,” not “how plausible,” is what’s written on the papr web page.) I’m trying to think of something nice to say, but this is really appalling. Maybe I’m misunderstanding what “quickly” means, and they actually mean an hour per paper.

    This email (?) at least says “how plausible they think it is” rather than “how accurate,” which is at least feasible, but no less appalling — a handy way to confirm our biases, not a way of assessing papers.

  5. Adede, Raghu:

    I don’t know; I didn’t look at the proposal in detail. I’m not sure whether a recommender system would be a good way to help more interesting and reasonable papers get more attention, or whether it would be just one more system to be misunderstood and gamed. I guess it could be both! The reason for my general positivity is that I’m guessing this system would be better than what we had before, which is papers circulating pretty much purely based on their ability to get publicity, whether via extravagant claims or connections to people with popular blogs or whatever!

    I didn’t look at the proposal carefully so I didn’t realize that you’re supposed to judge the paper after only 10 seconds. I agree that this is not much time to make a judgment.

  6. I just checked and this is an iteration of a Shiny app created in 2017. They are probably following the cycle of every startup: MVP (Shiny + Dropbox as backend(!)) and now an app. I found the recommender algorithm interesting; they are probably experimenting with this product. People complaining is normal, because there are two types of “clients”: early adopters and the rest.

  7. “surprisingly popular” algo was a(nother) 2D better-crowdsourcing finding:
    “Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions.”
    https://www.princeton.edu/news/2017/02/06/crowd-wisdom-surprisingly-popular-answer-can-trump-ignorance-masses
    Dunno if it’s been implemented in any current paper-metric tool. It’s all kinda successive-approximation algorithms, though; you need to wait till enough ‘opinions’ get in. (A rough sketch of the rule follows below.)
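For context on that rule, here is a minimal sketch in Python of the “surprisingly popular” selection principle described in the linked article. The question and the vote shares below are illustrative numbers made up for this sketch, not data from the paper.

    # Sketch of the "surprisingly popular" rule (Prelec et al., 2017):
    # ask each respondent for (1) their own answer and (2) their prediction
    # of how others will answer, then select the answer whose actual vote
    # share most exceeds its predicted vote share.

    def surprisingly_popular(actual_share, predicted_share):
        """Return the answer that is more popular than people predicted."""
        return max(actual_share,
                   key=lambda ans: actual_share[ans] - predicted_share[ans])

    # Illustrative numbers (not from the paper): "Is Philadelphia the capital
    # of Pennsylvania?"  Most respondents answer "yes" and predict that "yes"
    # will dominate, but "no" beats its predicted share, so it is selected.
    actual_share = {"yes": 0.60, "no": 0.40}
    predicted_share = {"yes": 0.75, "no": 0.25}

    print(surprisingly_popular(actual_share, predicted_share))  # -> no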
