Has the science public relations machine improved?

The above is not a rhetorical question.

I’ve never formally studied science communication, so it’s hard for me to say. I’d be interested to hear what people have found in this area.

The question came to me because I recently happened to run into this post from 2010, titled “Of psychology research and investment tips”:

A few days after “Dramatic study shows participants are affected by psychological phenomena from the future” (see here), the British Psychological Society follows up with “Can psychology help combat pseudoscience?”

Somehow I’m reminded of that bit of financial advice which says, if you want to save some money, your best investment is to pay off your credit card bills.

The background was that notoriously p-hacked ESP study which was uncritically hyped by the British Psychological Society and Freakonomics. (Sample bit from the BPS report: “These reverse effects seem bizarre but they are backed up by some rigorous methodology.” Uh, no.)

Question #1 is whether the British Psychological Society and similar organizations would continue to report this sort of thing without even a hint of skepticism. Unfortunately, last year’s “lucky golf ball” incident suggests that the answer is yes, they would. The trust-it-if-it’s-published attitude doesn’t seem to have gone away.

Question #2 is whether the British Psychological Society and similar organizations would have more self-awareness, to at least consider that they may have been net contributors to the propagation of pseudoscience. I don’t know. There’s been a lot of talk in the past decade or so about science reform, and many committees have been formed, but I don’t know if there has been much self-assessment from scientific societies or media organizations about their roles in the system of misinformation.

Have they done the intellectual equivalent of paying off their credit card bills? Or, to mix metaphors, are they continuing to spin their wheels?

11 thoughts on “Has the science public relations machine improved?”

  1. The folks over at “Not Even Wrong” are currently seriously unhappy about the science public relations machine.

    https://www.math.columbia.edu/~woit/wordpress/?p=13209

    https://www.math.columbia.edu/~woit/wordpress/?p=13229

    In one of those, Scott Aaronson is quoted as saying:

    “I confess: this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair. Before, I had implicitly assumed: no one was actually hoodwinked by this. No one really, literally believed that this little 9-qubit simulation opened up a wormhole, or helped prove the holographic nature of the real universe, or anything like that. I was wrong.”

    Wow.

    • This illustrates part of the difficulty in evaluating the trends: the differences in quality between publications are far greater than any time trends. People are upset about the wormhole thing not because someone said something stupid or misleading about quantum mechanics (that’s nothing new, and it was probably far worse in the past, in the era of things like *The Dancing Wu Li Masters*; Aaronson himself has had to devote a lot more time to quantum computing misinfo in the past) but because it appeared in *Quanta & NYT*, where you expect some of the best science coverage. Even Homer nods, however, and it’s hard to cover cutting-edge stuff without screwing up, so I am not revising my opinion of Quanta/NYT based on this – I expect this sort of screw-up occasionally, and as Aaronson might put it in the form of a Umeshism, “if your science news publication isn’t ever publishing new wormwholes, then worry you’re not publishing enough science *news* on the whole”.

      More broadly, I’m not sure. Do we give science journalism/PR credit for improvement if their inputs are better? Like a chef, it’s hard for science journalism to be much better than the starting ingredients. For example, in human genetics, 2010s science journalism/PR is much more likely to be true than it was in the 2000s; but that is because genetics itself moved from the candidate-gene era of ~0% true results to the GWAS era of closer to 100% true results. So, not hard to do better… Psychology reporting has gotten somewhat better, but again, how much of that is any ‘intrinsic’ improvement and how much is purely downstream of psychology itself reforming at least a little due to the Replication Crisis?

      As far as science journalism itself goes, it benefits a lot from the academia squeeze forcing many STEM researchers into other occupations (like science journalism, or at least blogging); from the rise of preprints/open access/pervasive fulltext availability and from periodicals grudgingly linking to the paper by default (not just the focal paper itself but all the references and relevant work, and, perhaps more importantly, making it much easier for *readers* to read the paper and call bullshit on it); and from the general rise of social media, so you can easily dip into sciTwitter to see what the cynics & critics are saying. On the negative side, the media itself continues to be squeezed, science journalism included. Budgets no longer accommodate long pieces so easily (note that Quanta is a billionaire’s mag, and the NYT is not too dissimilar either). The turn toward identity politics also does the reporting no good, even if it is popular and much cheaper; it is the 0%-fat food of reporting. And the human resources are being depleted (Sharon Begley is dead, with no replacement; and who will replace old hands like Carl Zimmer or William Broad?).

      All in all, I don’t see a big overall trend either way. We muddle on.

      • “Like a chef, it’s hard for science journalism to be much better than the starting ingredients.”

        I don’t see the analogy. Science journalism should be as much about the inputs and processes as about the output, so if the input is weak and the process is difficult, new, exploratory or whatever, that can all be reported and discussed. That’s science, and science journalism should cover that if it’s covering science. It’s OK if every day doesn’t have a ready-for-prime-time breakthrough.

        A good analogy is the electric car / renewable power journalism which, if you follow it on YouTube or elsewhere and look at the whole of the genre, does a pretty good job of covering the nuts and bolts, ferreting out hype and offering substantial criticism.

    • David:

      The “this was the first time” thing is funny, as I’d have assumed he’s seen hype before?

      In any case, I think it’s completely reasonable to feel visceral anger every time this sort of thing happens. It’s anger-worthy every time.

      • Yes. This is particularly noteworthy since SA has been on what I consider to be the inadequately critical side of at least two issues that (IMHO) have hype problems. To a certain extent, that’s why I like quoting him: he can’t be accused of being a knee-jerk critic.

      • I read “this was the first time I felt visceral anger, rather than mere bemusement, over this wormhole affair” as “I didn’t feel anger about the wormhole affair until now” – not as “the wormhole affair is the first time I felt anger”. (Or maybe I’m misunderstanding your comment.)

        • Good call: you did some careful reading there. Yes. He thought the wormhole paper and the ensuing hype was all bemusing silliness until the stuff in the previous paragraph (that I didn’t paste in, i.e. in effect snipped) happened.

          The whole (ongoing!) story is covered in detail on the Not Even Wrong blog, which I highly recommend.

  2. “There’s been a lot of talk in the past decade or so about science reform, and many committees have been formed,”

    Andrew:

    The “talk about reform” in the behavioral sciences you discuss in your “Why did it take so many decades” paper earlier this year is just that. If you scratch the surface of the contemporary dialogue about reform, what you’ll find is an awful lot of milling around, hand-wringing, side-stepping, hemming and hawing, and rifling through methodological and statistical drawers. What you won’t find is anything remotely resembling a consensus about the causes of, or bankable solutions to, either the so-called replication crisis or the null hypothesis significance testing controversy. And for a chilling look at the yield of contemporary multi-site projects, consider this summary assessment:

    “We freely concede that it is possible to take this record as a sign that social psychology, as practiced for the past half century, has been an exercise in futility marked by dubious theories built around false-positive findings.” (Baumeister, Tice, & Bushman, 2022, p. 19).

    Baumeister, R., Tice, D., & Bushman, B. (2022). A review of multi-site replication projects in social psychology: Is it viable to sustain any confidence in social psychology’s knowledge base? Perspectives on Psychological Science. https://1drv.ms/b/s!AiU3z2ipXdAWitVXbrb7DSV9GSnfOQ

    Nor is there any firm foundation for consensus about reform, because the inherently statistical nature of the behavioral sciences’ methodological discourse is almost completely untethered from the concrete, ground-level realities of psychological and behavioral phenomena. Long ago seduced by the sirens of statisticians, the behavioral sciences now find themselves hopelessly adrift in a sea of numbers, lost in the labyrinth of obscure, self-referential statistical assumptions, struggling to stay afloat in a rising tide of uninterpretable correlations, swept by the shifting winds of statistical fashion far beyond the sight of theoretical land. And where are the voices of reason from the statistical community that helped them into this mess? The statistical consultants on whom behavioral scientists are now completely dependent; the statisticians who advise on the rigors of sampling, imputation of missing values, and the fine points of statistical modeling, while remaining silent about the implausible, baseless, demonstrably false assumptions on which the entire enterprise rests? The statisticians who, as you said of the British Psychological Society, might “have more self-awareness, to at least consider that they may have been net contributors to the propagation of pseudoscience”?

    John
