
All Fools in a Circle

A graduate student in psychology writes:

Grants do not fund you unless you have pilot data – and moreover – show some statistically significant finding in your N of 20 or 40 – in essence trying to convince the grant reviewers that there is “something there” worth them providing your lab lots of money to then go conduct your research. This is especially pertinent in neuroimaging research AND in developmental research. I do both, so the problem is 2 fold. Everyone knows neuroimaging data is noisy and that historically there are problems with correcting for multiple comparisons / false positives are off the roof. And everyone knows that any experimental data collected from a child is super super noisy and you may need to throw out up to 50% of your sample due to child simply not following directions, or not staying still in the MRI scanner. So — funders are extremely reluctant to give you 2 million dollars to scan 300 kids — unless you show whopping preliminary data. This is even true for training grants – I have to prove that my task “works” in order for it to be deemed as a feasible/realistic/worthwhile project for a graduate student to receive funding for to conduct… and therefore, I have to do half of my study before I even apply for the grant.

That’s terrible, and it brings to mind the Armstrong principle. If this is really happening, something should be done about it immediately.

P.S. Above title comes from this wonderful list.


  1. Clyde Schechter says:

    Certainly if you are applying for a sizeable grant, the NIH requires you submit preliminary data showing the feasibility of your approach and your investigative team’s experience with the methods, instruments, materials, etc. that will be used. There is no specific requirement that the preliminary data include statistically significant findings, although I suppose some reviewers would look for that.

    But there are also small grant funding mechanisms that do not require preliminary data: just a good argument that the approach has promise. These grants have very limited budgets and a short funding period. You can’t keep a lab or shop running on these small grants, but as a way of starting down a new promising path they are serviceable.

    • Alex says:

      The problem is that they want to run fMRI on a large cohort of children. A small grant isn’t going to cut it. Unless you’re suggesting that they apply for a small grant in order to collect some data to turn around and use in a proposal for a large grant… which is saying they need a grant to put themselves in the position that they’re already complaining about.

      • Clyde Schechter says:

        In fact, this is fairly frequently done. You have a promising idea, you get a developmental small grant to demonstrate feasibility and generate some preliminary data, and then you use the preliminary data in support of a new grant proposal to do a full-blown study with it.

        • Alex says:

          So how does the small grant fix the issue (“the problem is 2 fold. Everyone knows neuroimaging data is noisy and that historically there are problems with correcting for multiple comparisons / false positives are off the roof. And everyone knows that any experimental data collected from a child is super super noisy and you may need to throw out up to 50% of your sample due to child simply not following directions, or not staying still in the MRI scanner”)

          or any of the issues that this process probably generates, like p-hacking and so on?

    • Anoneuoid says:

      Certainly if you are applying for a sizeable grant, the NIH requires you submit preliminary data showing the feasibility of your approach and your investigative team’s experience with the methods, instruments, materials, etc. that will be used.

      This is fine; the way to deal with it is to first collaborate on a project with someone doing something similar so you learn all the tricks from them.

      There is no specific requirement that the preliminary data include statistically significant findings, although I suppose some reviewers would look for that.

      This would be institutionalized pseudoscience.

  2. Vince S says:

    Not to mention, no one has a really good idea of how to do power calculations with so many multiple comparisons, which means that they are usually based on inflated “voodoo correlations” and studies end up way under-powered.
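To make the under-powering concrete, here is a quick sketch (my own illustration, using the standard Fisher z-transformation approximation, not anything from the comment) of what happens when a power calculation is fed an inflated pilot correlation:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def corr_power(r, n, z_crit=1.96):
    """Approximate power of a two-sided test of H0: rho = 0 at the 5%
    level, via the Fisher z-transformation (se = 1/sqrt(n - 3)).
    Ignores the negligible lower-tail rejection probability."""
    noncentrality = math.atanh(r) * math.sqrt(n - 3)
    return 1.0 - normal_cdf(z_crit - noncentrality)

# Power calculation fed by an inflated "voodoo" pilot correlation:
power_planned = corr_power(0.5, 100)   # looks near-certain
# ...but if the true correlation is more like 0.2:
power_actual = corr_power(0.2, 100)    # roughly a coin flip
```

With r = 0.5 a study of n = 100 looks overwhelmingly powered; if the true correlation is more like 0.2, the same study has only about 50% power, and that is before any multiple-comparisons correction shrinks the alpha threshold further.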

  3. Anon says:

    As an early career psychologist, I can verify that this grad student’s experience is absolutely the norm.

    • Anonymous says:

      “As an early career psychologist, I can verify that this grad student’s experience is absolutely the norm.”

      I am not sure if the following is useful for you and/or the graduate student from the post above, but have you thought about collaborating with fellow early career researchers, especially those who want to perform research according to relatively high/new/optimal standards?

      1) The site StudySwap can bring together (early career) researchers to form small groups of (future) collaborators.

      2) In the following link I tried my best to show why small groups of collaborators might be a good idea for (psychological) science, and for (psychological) scientists themselves:

      I reason that small groups of researchers working on a similar topic/theory/phenomenon can also write a grant proposal together and submit it as a group. They can briefly describe the possible benefits of collaboration in small groups (e.g., see a few possible benefits described in the link above), their intention to adhere to high/new/optimal practices, and the possible benefits to funders of spending their money on grants to small groups of researchers using such a format and these types of practices.

      • Erin Jonaitis says:

        I’m all for collaboration, but in the case of imaging research, getting lots of small groups of subjects at a variety of sites introduces something like batch effects as you’re using multiple machines to do the imaging. I know that there are some huge cooperative studies in some subfields where imaging data are pooled, so it must be possible to handle this statistically, but I don’t know a lot about how they do it.

        There are career-strategy questions here too – how do T&P committees view this kind of collaborative funding, compared to the traditional PI-and-subordinates model? – but perhaps that is in flux in social science departments these days, as the bad press from Wansink-like cases continues.

  4. In biology it seems this is the norm; worse yet, PIs wind up doing the work on their start-up funds, withholding most of the data. Once they get the grant, the money goes to funding the next step, not the step the grant was ostensibly for, since that was already mostly complete before the grant came in.

    Funding rates are on the order of 10%, and funds go to labs that crank out stuff based on grants they got years ago. That means it takes maybe 8-10 years as a starting investigator to really get a decent grant; you can’t just sit around and wait for money to start the work, so you wind up with this system.

    It’s broken and there is no feasible way to fix it for the foreseeable future. It also dramatically favors low quality, high volume work.

    • Adam S says:

      The main alternative is to fund the investigator/group rather than the individual project. This is done every 4 years for intramural researchers at the NIH. It would be harder to implement for extramural funding in a way that would be fair, for a large number of reasons, and it would also lose the benefits that come with having to write a research idea into a cohesive and logical grant and get reviewer feedback on it.

      Another thing that leads to this pattern is that most NIH grants don’t get funded on their first submission, and six months to a year pass before resubmission (and this may go on for several funding cycles). Of course, more work has been done during that period. It’s a broken system.

      For readers who aren’t aware of how the NIH process works, here’s a short overview: NIH grants are reviewed by scientists volunteering for a “study section.” A primary reviewer (who reads the whole grant beforehand) rates the grant on certain criteria and presents it to the rest of the study section; then some secondary reviewers (also supposed to have read it entirely) add their comments and scores, followed by a discussion among the whole study section. The initial scores are revised by the primary and secondary reviewers, then the whole study section votes by secret ballot on the final score, which is used to rank the grants. The ranking is then used at a later time by program directors to decide funding, largely based on the percentile rank but with some wiggle room for the grants near the cut-off line. In practice, this means that only a few flaws are needed to sink an otherwise excellent grant, so the process prioritizes projects that are “safe” yet “innovative” according to current mainline scientific practices and culture. Highly “productive” investigators get ranked better as well, and are also more likely to be invited to serve on study sections. Like begets like.

      NIH R21 grants specifically will not accept any preliminary data, the idea being to fund more innovative, exploratory work. This is supposed to address the problem of needing to do the study before you get funded for it. Of course, people are still incentivized to cheat this system as well: if you have done some preliminary work, you will probably be able to write a stronger grant, only now the reviewers can’t see what you have already done. Anecdotally, a colleague of mine was recently on an R21 study section. He noted that some people refused to even review a grant that had preliminary data, whereas others expected at least some preliminary data. R21s are a good idea in theory, but in practice a lot of the same problems remain.

      • Clyde Schechter says:

        As with any system relying on human judgment, particularly one that is highly ramified, there will be errors in application of the criteria, and there will be a tendency for like to beget like.

        The study section volunteers do receive training on the different types of grant funding mechanisms and the criteria to be applied to each. But training only gets you so far. During the section meeting, errors like expecting an R21 to have preliminary data should surface, be discussed, and be rectified before scoring. Actually, I think this particular error occurs fairly frequently because the reviewers tend to be senior people with ongoing research programs who haven’t applied for an R21 in a long time and easily slip into applying the more stringent criteria for larger grants.

        There is also the tendency of like to beget like. But my experience has been that funded projects that fail are far more likely to do so because the investigative team failed to manage the logistics of executing the study than for strictly scientific reasons. So the tendency to prefer funding a team that has demonstrated proficiency in carrying out studies has some rationality behind it.

        The system has a lot of flaws, and the results certainly leave a lot to be desired. But if you try to think about ways to fix it, you’ll see that it’s really hard to fix the existing problems without introducing new ones that could well be worse. That’s my nihilist thought for the day!

        • Actually I have a very simple solution: only let the scores determine half the signal, and let an RNG determine the rest. Have the grant committee generate a score, rescale it to the 0-1 range, then generate a uniform 0-1 random number R_i for each grant, make each grant’s selection probability proportional to score + R_i, and select the grants by drawing from the pool with those probabilities.

          This mixes the idea that humans give useful signal but biased, and that RNGs have no useful signal but are unbiased… the sum of the two should be somewhat useful and somewhat less biased.
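For concreteness, here is a minimal sketch of this scheme (the function name, the rescaling, and the weighted draw-without-replacement loop are my own illustration of the idea as described, not a vetted allocation mechanism):

```python
import random

def lottery_select(scores, n_awards, rng=random):
    """Half signal, half lottery: rescale committee scores to [0, 1],
    add a uniform(0, 1) noise term per grant, then draw n_awards grants
    without replacement with probability proportional to score + noise.
    Assumes the scores are not all identical."""
    lo, hi = min(scores), max(scores)
    rescaled = [(s - lo) / (hi - lo) for s in scores]
    weights = [s + rng.random() for s in rescaled]
    pool = list(range(len(scores)))
    chosen = []
    for _ in range(n_awards):
        total = sum(weights[i] for i in pool)
        u = rng.uniform(0.0, total)
        acc = 0.0
        pick = pool[-1]          # fallback guards against float rounding
        for i in pool:
            acc += weights[i]
            if acc >= u:
                pick = i
                break
        chosen.append(pick)
        pool.remove(pick)
    return chosen
```

Because each grant’s weight is its rescaled score plus an independent uniform draw, a top-scored grant is favored but never guaranteed, and a bottom-scored grant always retains some chance, which is the stated aim of mixing the biased-but-informative human signal with unbiased noise.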

    • Torquemada in Training says:

      Time out. Is this the same Daniel Lakeland who wrote on the epidemiology of zombies? And is this the same Andrew Gelman who wrote of zombie researchers?

  5. psyoskeptic says:

    There must be ways to fix all this, especially given that grant reviewers are fellow researchers. If we decide to change the system, then it can be changed. Perhaps a simple clause in the preliminary-data assessment saying that statistical significance cannot be used to determine whether the researchers have demonstrated expertise?

  6. I’m puzzled as to why everyone’s upset about this state of affairs, how they think it could be changed, or even what people think would be “fair”. Do researchers really expect to get $1M grants on the basis of zero track record and no preliminary results? How would that work? At some point, you have to realize you’re getting graded on more than your potential as a smart person and what sounds like a good idea. By the time you’ve finished a postdoc and are getting into the grant-writing business, you should have preliminary results. And ideally you should be teaming up with someone more senior who can mentor you through the grant-writing process.

    It is challenging to change fields or research direction. For me to do that, I had to go team up with Andrew. And to hire me and Matt and Daniel, he needed free research funds. Ben was working as a lecturer, which is another way to fund your research. I could hardly expect NSF to give me $1M to build something like Stan with zero track record in Bayesian stats and no preliminary results.

    I’d be OK with a system where money was doled out more evenly without such strong reviewing. Then people who wanted to work on Stan could all get a little bit of funding from the NSF to do it. That could work for us. But that’s clearly not the style the U.S. is going for in funding, where all these big center grants and whatnot have the goal of concentrating funding rather than spreading it out.

    Reviews also have another layer of subjectivity beyond all this, which is again arguably good. We don’t want to just fund every crackpot idea out there. But the downside is that we wind up with concentration of power in fields like linguistics, where if you weren’t working in the Chomskyan paradigm, the NSF wouldn’t even consider what you were doing. My first grant proposal was returned with the comment that it was “too European”, which is code for not approved by the American power axis of linguistics. A later grant proposal was kicked over to CS on the claim that because it had a computational model it wasn’t linguistics; CS said it was linguistics since there wasn’t any comp-sci research in it, so it took over a year for one of the silos to agree to review it. It got a very high rating in linguistics, but the program manager said he still decided not to fund it because it wasn’t linguistics [meaning it wasn’t Chomskyan]. The exact same fight led to the dissolution of our joint Ph.D. program in computational linguistics with Pitt (I was at Carnegie Mellon); this time the fight was about whether my Ph.D. students’ qual paper on German word order counted as linguistics. The linguists dismissed it as “engineering” because there was a computational model. This stuff runs deep.

    Then I got to see the other side. After killing myself for a week reviewing a pile of grant proposals and traveling to D.C., the program manager disregarded the panels’ recommendations and decided to fund some of the well-known applicants whose proposals had been graded as inferior by the panel. I figure if they want to give out money that way, I’m not going to work hard to lend an air of impartiality to the endeavour. I’ve declined every invitation to review for grants since.

    • Andrew says:


      Requiring a track record and preliminary results is fine; I have no problem with that. My problem is if there’s an implicit requirement for “statistical significance” from the pilot data. That’s just stupid. If, for example, you think you need a study of N=200 to estimate what you are interested in, then an N=20 pilot can be helpful in giving you a sense of what problems might arise in data collection, but the aggregate results from N=20 will be essentially noise, so the only way you’ll get statistical significance is by luck or by cheating. Either way, what you’re then looking at is some random pattern in the data, and you’re pre-committing to finding something that probably won’t show up in a larger study. This is a recipe for, at best, a wasted larger study, or, perhaps more likely, an incentive to cheat in the larger study.
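A small simulation makes the point (my own illustration; the effect size, the per-arm n, and the known-variance z-test approximation are all assumptions for the sketch):

```python
import math
import random
import statistics

def pilot_sim(true_d=0.2, n=20, reps=5000, seed=1):
    """Simulate two-arm pilot studies with n subjects per arm and a true
    effect of true_d standard deviations. Returns the share of pilots
    reaching p < .05 and the average |effect| observed in those 'lucky'
    significant pilots."""
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n)   # known-variance (z-test) approximation
    hits = []
    for _ in range(reps):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(true_d, 1.0) for _ in range(n)]
        diff = statistics.fmean(b) - statistics.fmean(a)
        if abs(diff / se) > 1.96:
            hits.append(abs(diff))
    return len(hits) / reps, statistics.fmean(hits)

share_significant, winners_mean_effect = pilot_sim()
```

Under these assumptions roughly one pilot in ten comes up significant, and the ones that do necessarily show an apparent effect of at least 0.62 sd, i.e. three or more times the true 0.2: exactly the random pattern that a larger, honest study then fails to reproduce.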

      • Absolutely. I took it as given on the blog here that we don’t want to do low N significance for exactly the reasons you mention.

        We have had some big software development grants, but otherwise have been looking for angles to write research grants to extend Stan. The challenge is to propose something concrete enough that it doesn’t sound like wild speculation, yet not so well thought out that it looks more like engineering than research. This is true of all proposals, which have to sound novel enough to be interesting, but not so off the wall as to seem implausible or out of left field. As Daniel Lakeland pointed out in one of the comments, this makes the whole process more conservative.

      • Andrew,

        I am so p-tuckered out after last weekend’s volley about the p-value. I refer to it as the poor P-tered Value.

    • Erin Jonaitis says:

      The thing that’s not awesome about this model of funding allocation is that if you don’t believe results from small-N studies, which is what these pilot studies typically are, it amounts to some cross between funding by coin toss and funding by adverse selection, i.e. the money will go to those who are lucky or p-hacky and not necessarily to those with the best ideas or the best execution. And in the process you incentivize researchers to keep using inferential tools in ways that might not be telling us what we actually want to know.

      An alternative might be something closer to what journals want for preregistered research articles, with pilot data collected solely so that PIs can work out the kinks (can we actually recruit subjects as quickly as we think, will people tolerate the intervention, can we export the data and read it, etc.) and no statistics reported beyond descriptives.

      • Yes, that’s exactly what would (and apparently does) happen. As I mentioned in the reply to Andrew, I didn’t mean to support small-N studies as a means of selection. And as I said there, the problem is that the smoothing introduced to mitigate the small-N factor (either literally or figuratively because the proposer has little track record or the proposal is very novel) is exactly the drive to conservatism in selecting proposals.

    • Bob, I think the reasons you mention are the kinds of things that people are upset about. The system doesn’t actually score on the merits, and doesn’t take any real scientific risks, whereas it does take a lot of risk that people will waste funds doing more of the same.

      Your line: “the downside is that we wind up with concentration of power in fields like linguistics, where if you weren’t working in the Chomskyan paradigm, the NSF wouldn’t even consider what you were doing”

      I also think the way we fund biomedical research puts lots of funding into rare high-profile diseases like ALS and not much into common and debilitating problems like, say, non-healing bone injuries (millions of people per year) or chronic sinusitis (10% of the population; this was Ioannidis’s first grant application, for example, to try *not* giving antibiotics to sinusitis sufferers, which is in fact exactly the right thing to do IMHO).

      Sure it doesn’t make sense to give money out to people who can’t do the research, but when you look at what really happens… this isn’t the result of the current system.

      See above for my suggested simple debiasing technique.

      • I didn’t articulate the other side of this clearly enough. I meant that the whole point of having these panels is to bring in scientists to exercise some judgement. I think everyone can agree that we don’t want the geoscientists to have to be “fair” and support flat-earth applications, or biology to have to support creationists. The question is where the line gets drawn. I’ve realized later in my career that if I want to have any sway at all, I need to start exercising that judgement. Let’s say I don’t want to see anything with hypothesis tests. The best way for me to have an impact is just to reject every paper I review that has hypothesis tests and vote down any proposal that includes them. I think that’s what psyoskeptic was getting at above: it has to be a collaboration among the scientists themselves. But when I look at what happened to me, it was just that—a matter of letting people (in this case program managers and reviewers) exercise judgement about what kind of linguistics is good to pursue. I think they came to exactly the wrong conclusion. What would be needed to deal with this is some kind of anti-trust enforcement to push back against concentration of power around one group or one particular theory. I have no idea how to even begin doing that. The best I’ve ever come up with is to just go off on your own, collect collaborators, and build up.

    • Anoneuoid says:

      The problem is more requiring you give some indication your theory is “correct” before getting funding. Showing you have the capabilities to run the equipment, write some code, etc is the part no one has a problem with.

      I personally know someone about to drop out of a PhD program because they couldn’t “get results” from publicly available microarray data that is now suspected to be mostly garbage to begin with. You see one pattern in one paper, then some slightly different method in another paper with a totally different pattern, and so on. This discovery doesn’t count as “a result” in some minds.

      • This is a problem with a lot of the machine learning and stats literature, too, which are often based on small-N or very selective data collection/reporting. It’s a huge problem in the machine learning conference acceptance race as the NIPS experiment by Cortes and Lawrence pretty clearly showed.

        I think correctness is too lofty a goal. What criteria do you think should be applied to select among grants? I think NSF and NIH are around 10–20% acceptance rates. This really is competition for limited funds.

        Grant applications are competitive in the sense that the proposers are competing hard. Do you forbid people from showing preliminary results? If not, it’s a prisoner’s dilemma situation and you have to show results to compete against the next prisoner, er, proposer.

        It’s a vexing problem, but I don’t see how we’re going to get around this. To me, the more annoying thing is having to show a bit of results, but you can’t make it look too good or the reviewers will think the problem is already solved. So there’s this sweet spot on the difficulty level of what you can propose—it has to be plausible, but not so easy that it’s already done in the proposal. And you have to get that across in the proposal itself. That leads to more conservativeness. With limited space and limited reviewer attention, it’s hard to appeal to novel arguments.

    • Shravan says:

      An accurate but depressing picture of linguistics. This kind of power playing is common in Germany/Europe too. One strange thing with linguists is that once they take a theoretical position they are unable to say, yeah, maybe I got this story wrong. Once I have a stand, I have to protect my story. You have to stand for an idea to get grants and awards, so they can stand you up on the stage and say, he/she did this specific thing. An exception is if Chomsky abandons a theory, say in a footnote. Then suddenly everyone changes course. In Delhi I once attended a talk by a linguist who was Chomsky’s friend or something, and all the questions after his talk were “what does Chomsky think of X?”

      • Shravan says:

        Recently I was asked to fill out a form for some kind of university level review of general performance, and the top question was how *many* publications did I have in the last two years? Content irrelevant (they don’t want to see the actual list); I could have published 23 replications of some subject/object relative clause study and still gotten to the same number. The DFG, our NSF, taught me this lesson back in 2011 by rejecting an extension of a funding proposal on the grounds (I got this in writing for use at some future date when I am invited to give a keynote on the publication game) that I had published only 3-4 papers in four years in that project. I was working on a hard problem involving Hindi and involving flying a student every year to India to run the studies. But the DFG clearly signalled to me that they wanted volume. Their official policy is different; on CVs sent to them they allow only five publications to be listed. But the reviewers are counting numbers, and they are the ones that get to decide.

        So that’s what the funding agencies are gonna get from me now: volume. If they allowed me to do one good piece of work a year, I would probably produce better quality research; as it is, I work hard on every paper but eventually give up and publish it because I have to move on to the next thing. Given that top journals in my field will happily publish some underpowered study with two messy and p-hacked experiments, playing the game is easy; I do try to get much more data than the average paper, so I am still much slower than other labs. It does not help that senior researchers write things like “I have published over 300 papers” on their home pages as one of their major achievements, signalling to the next generation that the count is what matters. (How can one produce 300 papers in a lifetime anyway?)

        And some months ago, a senior person in the uni told me that now that I have 1.5 million Euros of research funding, now is the time to get a 2 million Euro ERC grant, because the statistics show that winners usually have a lot of funding already. And they are right. So I’m going to write one! Better that I get the money than someone who burns it up on some pointless low powered experiments.

        • Andrew says:


          That reminds me of a story from when I was an untenured professor. One of my more clueless senior colleagues came by and asked me about my research. I told him about my paper on R-hat, which, even back then, had received a lot of attention. He asked me why, if it was such a great idea, I had written only one paper on it. Shouldn’t there be a whole series of papers on the topic?, he asked.

          It seemed that, to him, the publication process was a goal in itself: the point of a research project was not to solve some external problem but to supply a continuing stream of publications.

          It might be that things would be better now, given the easy availability of citation counts. Sure, crap papers can get thousands of citations too, but at least there’s some measure other than “number of papers” and “approval of the personal friends of the old guys on the promotion committee.”

          Just one thing: Shravan, you ask “How can one produce 300 papers in a lifetime anyway?” It’s possible. I’ve done it. Most of the papers don’t include original data analysis, though. I’d find it difficult to produce 300 papers, each describing the design and analysis of several new experiments. But I guess if someone has a big enough research lab, they could coordinate all that work over several decades. (Another way to publish hundreds of papers is to just copy and paste your earlier papers over and over, like this guy did. Perhaps not coincidentally, some of that published work was ridiculous.)

          • Publishing 300 papers each of which is about an experiment taking a PhD student 5 years and flying back and forth to India from Germany… It’s clearly impossible. When I see 300 papers I know that this person is not conducting experiments… If the primary thing being rewarded in science is NOT bothering with experiments…. Well it is depressing, hearing what Shravan writes yet again from another researcher who I respect for his contributions here… It’s depressing. In other words academic science is depressing.

            • Shravan says:

              Related to the speed vs insight tradeoff in science:

              One thing I’d love to know is: how did Feynman work? I read his books, and I know something about his working style: work on a dozen problems at the same time, and always try to apply every method you know to each one of them. He also memorized many integrals early in life and told his students at Caltech to just practice differentiation and integration with made-up equations. This gave him an edge; he could solve equations faster than many other physicists, who couldn’t keep up with his derivations. He was a falsificationist; if the facts didn’t fit the theory, the theory was wrong and he abandoned it.

              Peter Medawar also seems like an interesting guy, and I’d like to have read a book about how he and Feynman did their scientific work.

              As an experimentalist and computational modeler, I think taking four years off to study statistics gave me an edge. It fundamentally changed my lab. I think if I became a better programmer over the next years, that would give me the next big push forward. I’m again going to take some time off from research come winter semester, to acquire some new computational skills. These steps might lead to better quality work, and maybe a point will come that quantity will no longer matter. That’s my hope.

              One constructive thing I did recently was that I offered my services as a reviewer to the European Research Council, which gives out big grants. Unfortunately they made me sign a non-disclosure agreement, so I can’t discuss my experiences there, but before I signed that agreement, I had done some reviewing for the ERC, and my limited experience is that they tend to reward fame and volume in their funding decisions. I know some exceptions though, of a low-volume, high-quality researcher who got funded. So it’s not uniformly bad. My hope is to get some traction in the ERC to help them avoid making bad (IMO) decisions. I have only 13 years left to retirement, so the clock is ticking!

              • Anoneuoid says:

                This gave him an edge; he could solve equations faster than many other physicists, who couldn’t keep up with his derivations. He was a falsificationist; if the facts didn’t fit the theory, the theory was wrong and he abandoned it.

                Perhaps because he could theorize so easily any single theory didn’t mean much to him, so he didn’t end up attaching his identity/career to it.

              • Shravan,

                Do you think that studying statistics gave you an because statistics was emerging as prevalent in AI and machine learning? Or was it a substantive value add.

              • Shravan,

                Sorry, I meant to include the word ‘add’ in my question.

              • ‘edge’ not ‘add’.

                Shouldn’t perform other tasks while typing. Lesson learned.

              • Shravan says:

                Sameera, you wrote

                “Do you think that studying statistics gave you an because statistics was emerging as prevalent in AI and machine learning?”

                No, nothing to do with all that. It’s a long story:

            • Anoneuoid says:

              I recently heard that for some people the number of papers published has become a negative metric when hiring. “Oh you have a publication” *throws resume in trash*. Apparently after enough bad experiences that problem will just resolve itself. A different solution is probably needed regarding taxpayer funding though.

          • Shravan says:

            Yes, I can see that in some disciplines it can be possible to publish 300+ papers in one’s lifetime, especially when one starts early in one’s career (I was a late starter, with an earlier career as a patent translator in Japan; so I have no chance to reach that number). I think Andrew started in science much earlier in life than people like me.

            Also, in the experimentally based sciences, as Daniel points out, it is simply impossible to produce at a rate fast enough to get there in one’s career. Over a 30-year career, 300 papers would mean 10 a year on average, nearly one a month. My (= students’ + postdocs’ + my) papers take about three years to go from the initial conception of the idea to eventual rejection by a journal.

            Right now I am at the end of a hot phase, producing 12 papers a year, but I was on a two year sabbatical funded by the Volkswagen Foundation. My productivity is about to take a nose dive. Teaching regular classes and doing research in a German university (not counting the time to fill out idiotic performance review forms) means maybe 1-4 papers a year at best.

          • Shravan says:

            Andrew, I guess I don’t object per se to producing 300+ papers, but I think it’s better not to put that up on one’s home page as an index of one’s achievement. I looked at your home page and I don’t see it. That’s what I am talking about. Someone at your level doesn’t need to advertise the number of pubs to signal their achievements. But other senior people do. When young scientists see those numbers, it becomes an aspiration, a goal in itself, because they look up to the senior people as role models. Of course, funding agencies encourage this. Heck, my own annual lab budget from the uni depends on the *number* of publications, among other things.

      • Shravan says:

        Bob, just today at Dussmann’s (an English-language bookstore in Berlin) I picked up this book: What is Real? It promises to explain the political machinations of Niels Bohr; it seems that he enforced the Copenhagen interpretation of quantum physics through sheer political control (I haven’t read the full book yet, obviously): “Buoyed by political expediency, personal attacks and the priorities of the military-industrial complex, the Copenhagen interpretation has enjoyed undue acceptance for nearly a century.” In a sense it’s a relief that it’s not just linguistics. Science is to a great extent a game of political control; the science just gets done as an accidental side-effect.

        One nice thing in the book is that initially Feynman was contemptuous of people questioning the Copenhagen interpretation, but he changed his mind when the facts changed. (I admire Feynman a lot.)

        • Shravan,

          That book sounds intriguing to me.

          • Shravan says:

            The book is pure gold. There is an astonishing discussion there about how Bohm’s alternative formulation of quantum theory was actively suppressed by Niels Bohr’s disciples. Opposing opinions were not allowed. Sounds familiar; in psycholinguistics too, if you have an opposing opinion to some influential person’s view, they will come after you and your students. There is little acceptance of alternative positions. Bob’s experience just repeats itself over and over again.

            Maybe linguistics was inspired by the culture in physics; one could easily write a book on the linguistics wars (actually there is such a book, but it only tells a small part of the story).

            Also, Bohm was stripped of his US citizenship after he was kicked out of the US for being a communist and banished to Brazil, and when he wanted to come back to the US to restart his career in academia, the US told him that he would have to denounce communism publicly. By this time he was no longer a Marxist, but he said it would be unethical to denounce something (even though he no longer believed in it) just to gain material benefit, so he decided to stay in the UK, where he was a professor.

            • Shravan,

              Thank you. Sounds fascinating. It speaks to why the sociology of expertise is so important to understanding what occurs in the bid for prestige, scholarship, and power. I haven’t seen that degree of animus in academia, although I understand it has existed within departments. I have enjoyed reading some of the scholarship about university life done by legal scholars like Richard Posner and Deborah Rhode.

              Thank goodness I was not part of a generation that was a product of the Cold War or the capitalism/Marxism debate, or that was more generally ensconced in some ideological rut.

              I’ll read it.

              Finally, although my father was not a linguist nor psycholinguist, he was often in discussions with linguists and maybe even psycholinguists. The discussions would get quite technical. He was of a generation that had the luxury of idle time while teaching. That is much harder today.

  7. Thanatos Savehn says:

    Perhaps off topic, but the conversation prompted me to see if journals continue to publish mechanistic breast cancer studies based on the MDA-MB-435 cell line long after that line was established to actually be melanoma cells from a male. The search didn’t disappoint. Of the first page of articles (20), none were published before 2018, and the monthly rate appears, if anything, to be increasing.

    Looking through those I could access revealed funding by reputable agencies and the journals to be of the not particularly sketchy variety. Putting aside the worry that peer review has quickly gone from an indicator of reliability to a “True Science!” sticker retrieved from a box of Cracker Jack, I get the impression that somehow the grant approval process acquired the Sorcerer’s Apprentice problem: once they start pouring money into a problem they can’t stop, no matter how useless (or dangerous) the resulting flood of articles becomes.

    Surely there’s some way to set aside part of the available money for inductions that sprang not from HARKed pilot data but rather from chance observations and the intriguing narrative/model that appeared when they were put together just so.

  8. BenK says:

    There are huge problems in science and technology management. The problem of how to allocate scarce funding to untested hypotheses is a huge one. It isn’t made better by effectively allocating funding only to “already tested hypotheses,” when the funding to do the original experiments is insufficient, and so the evidence is by definition insufficient… it’s a spiral.

    DARPA’s way out, in effect, is to offer a prize for an answer, but then you transparently push the risk back onto the researchers, and the prize needs to be sufficiently large to account for it.

    The current practices are unacceptable and we see the poisoned fruit every day; but we don’t yet have a better solution and it won’t come without changing the cultures (and incentives) of science and technology management in the funding sources.

  9. Joshua Pritikin says:

    Developmental research is always going to be challenging, but the neuroimaging part could get cheaper with new tech on the horizon. A company is working on holographic near-infrared imaging. The linked presentations on the events page are quite tantalizing. I hope it turns out to work as well as they claim.

Leave a Reply