Do research articles have to be so one-sided?

It’s standard practice in research articles as well as editorials in scholarly journals to present just one side of an issue. That’s how it’s done! A typical research article looks like this:

“We found X. Yes, we really found X. Here are some alternative explanations for our findings that don’t work. So, yeah, it’s really X, it can’t reasonably be anything else. Also, here’s why all the thickheaded previous researchers didn’t already find X. They were wrong, though, we’re right. It’s X. Indeed, it had to be X all along. X is the only possibility that makes sense. But it’s a discovery, it’s absolutely new. As was said of the music of Beethoven, each note is prospectively unexpected but retrospectively absolutely right. In conclusion: X.”

There also are methods articles, which go like this:

“Method X works. Here’s a real problem where method X works better than anything else out there. Other methods are less accurate or more expensive than X, or both. There are good theoretical reasons why X is better. It might even be optimal under some not-too-unreasonable conditions. Also, here’s why nobody tried X before. They missed it! X is, in retrospect, obviously the right thing to do. Also, though, X is super-clever: it had to be discovered. Here are some more examples where X wins. In conclusion: X.”

Or the template for a review article:

“Here’s a super-important problem which has been studied in many different ways. The way we have studied it is the best. In this article, we also discuss some other approaches which are worse. Our approach looks even better in this contrast. In short, our correct approach both flows naturally from and is a bold departure from everything that came before.”

OK, sometimes we try to do better. We give tentative conclusions, we accept uncertainty, we compare our approach to others on a level playing field, we write a review that doesn’t center on our own work. It happens. But, unless you’re Bob Carpenter, such an even-handed approach doesn’t come naturally, and, as with any such adjustment, there’s always the concern of going too far (“bending over backward”) in the other direction. Recall my criticism of the popular but I think bogus concept of “steelmanning.”

So, yes, we should try to be more balanced, especially when presenting our own results. But the incentives don’t go in that direction, especially when your contributions are out there fighting with lots of ideas that other people are promoting unreservedly. Realistically, often the best we can do is to include Limitations sections in otherwise-positive papers.

One might think that a New England Journal of Medicine editorial could do better, but editorials have the same problem as review articles, which is that the authors will still have an agenda.

Dale Lehman writes in, discussing such an example:

A recent article in the New England Journal of Medicine caught my interest. The authors – a Harvard economist and a McKinsey consultant (with their ties properly disclosed) – provide a variety of ways that AI can contribute to health care delivery. I can hardly argue with the potential benefits, and some areas of application are certainly ripe for improvements from AI. However, the review article seems unduly one-sided. Almost all of the impediments to application that they discuss lay the “blame” on health care providers and organizations. No mention is made of the potential errors made by AI algorithms applied in health care. This I found particularly striking since they repeatedly appeal to AI use in business (generally) as a comparison to the relatively slow adoption of AI in health care. When I think of business applications, a common error might be a product recommendation or promotion that was not relevant to a consumer. The costs of such a mistake are generally small – wasted resources, unhappy customers, etc. A mistake made by an AI recommendation system in medicine strikes me as quite a bit more serious (lost customers is not the same thing as lost patients).

To that point, the article cites several AI applications to prediction of sepsis (references 24-27). That is a particular area of application where several AI sepsis-detection algorithms have been developed, tested, and reported on. But the references strike me as cherry-picked. A recent controversy has concerned the Epic model (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8218233/?report=classic) where the company reported results were much better than the attempted replication. Also, there was a major international challenge (PhysioNet: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6964870/) where data was provided from 3 hospital systems, 2 of which provided the training data for the competition and the remaining system was used as the test data. Notably, the algorithms performed much better on the systems for which the training data was provided than for the test data.

My question really concerns the role of the NEJM here. Presumably this article was peer reviewed – or at least reviewed by the editors. Shouldn’t the NEJM be demanding more balanced and comprehensive review articles? It isn’t that the authors of this article say anything that is wrong, but it seems deficient in its coverage of the issues. It would not have been hard to acknowledge that these algorithms may not be ready for use (admittedly, they may outperform existing human models, but that is an area on which there is research and it should be noted in the article). Nor would it be difficult to point out that algorithmic errors and biases in health care may be a more serious matter than in other sectors of the economy.

Interesting. I’m guessing that the authors of the article were coming from the opposite direction, with a feeling that there’s too much conservatism regarding health-care innovation and they wanted to push back against that. (Full disclosure: I’m currently working with a cardiologist to evaluate a machine-learning approach for ECG diagnosis.)

In any case, yes, this is part of a general problem. One thing I like about blogging, as opposed to scholarly writing or journalism, is that in a blog post there’s no expectation or demand or requirement that we come to a strong conclusion. We can let our uncertainty hang out, without some need to try to make “the best possible case” for some point. We may be expected to entertain, but that’s not so horrible!

Evilicious 3: Face the Music

A correspondent forwards me this promotional material that appeared in his inbox:

“Misbeliefs are not just about other people, they are also about our own beliefs.” Indeed.

I wonder if this new book includes the shredder story.

P.S. The book has blurbs from Yuval Harari, Arianna Huffington, and Michael Shermer (the professional skeptic who assures us that he has a haunted radio). This thing where celebrities stick together . . . it’s nuts!

P.P.S. The good news is that there’s already some new material for the eventual sequel. And it’s “preregistered”! What could possibly go wrong?

Philip K. Dick’s character names

The other day I was thinking of some of the wonderful names that Philip K. Dick gave to his characters:
Joe Chip
Glen Runciter
Bob Arctor
Palmer Eldritch
Perky Pat

And, of course, Horselover Fat.

My personal favorite names from these stories are Ragle Gumm from Time out of Joint, and Addison Doug, the main character in an obscure spaceship/time-travel story from 1974.

I feel like it shows a deep confidence to give your characters this sort of name. As names, they’re off, but at the same time they’re just right in context. “Addison Doug,” indeed.

Some authors are good at titles, some are good at last lines, some are good at names. So many books, even great books, have character names that are boring or too cute or just fine, but no more than just fine. To come up with these distinctive names is a high-risk ploy that, when it works, adds something special to the whole story.

The contrapositive of “Politics and the English Language.” One reason writing is hard:

In his classic essay, “Politics and the English Language,” the political journalist George Orwell drew a connection between cloudy writing and cloudy content.

The basic idea was: if you don’t know what you’re saying, or if you’re trying to say something you don’t really want to say, then one strategy is to write unclearly. Conversely, consistently cloudy writing can be an indication that the writer ultimately doesn’t want to be understood.

In Orwell’s words:

[The English language] becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts.

He continues:

In our time, political speech and writing are largely the defence of the indefensible. Things like the continuance of British rule in India, the Russian purges and deportations, the dropping of the atom bombs on Japan, can indeed be defended, but only by arguments which are too brutal for most people to face, and which do not square with the professed aims of the political parties. Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness.

A few years ago I posted on this topic, drawing an analogy to cloudy writing in science. To be sure, much of the bad writing in science comes from researchers who have never learned to write clearly. Writing is hard!

But it’s not just that. A key problem with a lot of the bad science that we see featured in PNAS, Ted, NPR, Gladwell, Freakonomics, etc., is that the authors are trying to use statistical analysis and storytelling to do something they can’t do with their science, which is to draw near-certain conclusions from noisy data that can’t support strong conclusions. This leads to tortured constructions such as this from a medical journal:

The pair‐wise results (using paired‐samples t‐test as well as in the mixed model regression adjusted for age, gender and baseline BMI‐SDS) showed significant decrease in BMI‐SDS in the parents–child group both after 3 and 24 months, which indicate that this group of children improved their BMI status (were less overweight/obese) and that this intervention was indeed effective.

However, as we wrote in the results and the discussion, the between group differences in the change in BMI‐SDS were not significant, indicating that there was no difference in change in our outcome in either of the interventions. We discussed, in length, the lack of between‐group difference in the discussion section. We assume that the main reason for the non‐significant difference in the change in BMI‐SDS between the intervention groups (parents–child and parents only) as compared to the control group can be explained by the fact that the control group had also a marginal positive effect on BMI‐SDS . . .

Obv not as bad as political journalists in the 1930s defending Stalin’s purges or whatever; the point is that the author is in the awkward position of trying to use the ambiguities of language to say something while not quite saying it. Which leads to unclear and barely readable writing, not just by accident.

The writing and the statistics have to be cloudy, because if they were clear, the emptiness of the conclusions would be apparent.

The problem

Orwell’s statement, when transposed to writing a technical paper, is that if you attempt to cover the gaps in your reasoning with words, this will typically yield bad writing. Indeed, if you’re covering the gaps in your reasoning with words, you’ll either have bad writing or dishonest writing, or both. In some important way, it’s a good thing that this sort of writing is so hard to follow; otherwise it could be really misleading.

Now let’s flip it around.

Often you will find yourself trying to write an article, and it will be very difficult to write it clearly. You’ll go around and around, and whatever you do, your written output will feel like the worst of both worlds: a jargon-filled mess, while at the same time being sloppy and imprecise. Try to make it more readable and it becomes even sloppier and harder to follow at a technical level; try to make it accurate and precise, and it reads like a complicated, uninterpretable set of directions.

You’re stuck. You’re in a bad place. And any direction you take makes the writing worse in some important way.

What’s going on?

It could be this: You’re trying to write something you don’t fully understand, you’re trying to bridge a gap between what you want to say and what is actually justified by your data and analysis . . . and the result is “Orwellian,” in the sense that you’re desperately using words to try to paper over this yawning chasm in your reasoning.

The solution

One way out of this trap is to follow what we could call Orwell’s Contrapositive.

It goes like this: Step back. Pause in whatever writing you’re doing. Pull out a new sheet of paper (or an empty document on the computer) and write, as directly as you can, in two columns. Column 1 is what you want to be able to say (the method is effective, the treatment saves lives, whatever); Column 2 is what is supported by your evidence (the method works better than a particular alternative in a particular setting, fewer people died in the treatment than the control group after adjusting this and that, whatever).

At that point, do the work to pull Column 2 to Column 1, or make concessions to reality to shift Column 1 toward Column 2. Do what it takes to get them to line up.

At this point, you’ve left the bad zone in which you’re trying to say more than you can honestly say. And the writing should then go much smoother.

That’s the contrapositive: if bad writing is a sign of someone trying to say the indefensible, then you can make your writing better by not trying to say the indefensible, either by expanding what is legitimately defensible or by restricting what you’re trying to say.

Remember the folk theorem of statistical computing: When you have computational problems, often there’s a problem with your model. Orwell’s Contrapositive is a sort of literary analogy to that.

One reason writing is hard

To put it another way: One reason writing is hard is that we use writing to cover the gaps in our reasoning. This is not always a bad thing! On the way to the destination of covering these gaps is the important step of revealing these gaps. We write to understand. Writing has an internal logic that can protect us from (some) errors and gaps—if we let it, by reacting to the warning sign that the writing is unclear.

Blogging is adapted to laptops or desktops, not to smartphones or pads.

Sean Manning writes:

People behave differently on the post-2008 Internet than before because most of them are on smartphones or pads, not laptops or desktops. For example, it’s hard to copy and paste blocks of text on a touchscreen, but usually easy to make screenshots, so people move things from one site to another as screenshots. It’s hard to jump precisely around a text and type punctuation marks, so it’s hard to enter bbcode. It’s easy to scroll, so sites designed for smartphones often have an infinite scroll. It’s easy to pull out a smartphone in breaks from other activities, so people visiting the Internet on a smartphone are often in a hurry. People do more of what their tools encourage (affordances) and less of what their tools discourage.

Good point! I hadn’t thought of it that way, partly I guess because I don’t have a mobile phone or pad, so I do very little interaction with touchscreens.

A few years ago someone contacted me with a proposal to fix up the blog and make it more friendly to mobile devices, but it wasn’t clear to me that these changes would actually work. Or, to put it another way, it seemed that any changes would either be too minor to make a difference, or so major that they wouldn’t work with the sort of content we have here. What I hadn’t thought about was Manning’s point, that the way we write and interact on this blog is in some ways a function of how we interact with it on the computer.

There probably are some ways of making the blog more mobile-friendly, but I guess the real point is that the style of communication we’ve developed here works for this format. Kinda like how some stories work better as movies, some as TV shows, and some as plays. You can transfer from one medium to another but they’re different.

“Not once in the twentieth century . . . has a single politician, actor, athlete, or surgeon emerged as a first-rate novelist, despite the dismayingly huge breadth of experience each profession affords.”

Tom Bissell writes:

Recently, in The Spooky Art, Norman Mailer [wrote that] Not once in the twentieth century . . . has a single politician, actor, athlete, or surgeon emerged as a first-rate novelist, despite the dismayingly huge breadth of experience each profession affords. For better or worse, and I am prepared to admit worse, writers are writers are writers. This explains why so many mediocre fiction writers sound the same, why there exist so many books about writers, and why many talented fiction writers seem to think that their best option to distinguish themselves is to flee the quotidian to explore more fanciful subject matter.

That’s an interesting point. Here in the twenty-first century, novel writing is a niche art and a niche business. In the previous century, though, the novel was a major popular art form, and lots of people were motivated to write them, both for artistic and financial reasons. Great novels were written in the twentieth century by people with all sorts of social backgrounds, high, low, and various steps in between—George Orwell was a police officer!—but I think Mailer was right, that none of these great novels were written by politicians, actors, athletes, or surgeons. Perhaps the closest candidate is Michael Crichton (not a surgeon, but he was trained as a doctor; no great novels, but he did write Jurassic Park, which was solid genre fiction). Had his novels not been successful, it seems likely he would’ve just become a doctor, which indicates a bit of selection bias in Mailer’s statement. Jim Bouton authored the literary classic Ball Four, but it’s not a novel, and presumably the writing was mostly done by his coauthor, who was a professional writer. OK, I guess my best shots on this are George V. Higgins (author of some arguably-great novels (see also here) and also a practicing lawyer) and Scott Turow (also a practicing lawyer as well as an author of several excellent legal thrillers which, OK, they’re not great novels, but they have a lot of strengths; I guess I’d say they’re better than Michael Crichton’s even if they don’t have the originality of someone like Jim Thompson). But “lawyer” is not quite the same category as “politician, actor, athlete, or surgeon”—indeed, a lawyer is already a sort of professional fiction writer.

I dunno, it’s an interesting question. I assume there were a fair number of twentieth-century politicians, actors, athletes, and surgeons who had the capacity to write a great novel, or at least make a fair attempt, but it doesn’t seem to have happened. Maybe it would just have taken too much effort, to the extent that, had they gone all-in to write a great novel or a reasonable attempt, they would’ve just become full-time writers, and that’s what we’d remember them as. I’m not sure.

Gore Vidal was a politician (kind of) and wrote some excellent novels, maybe they don’t count as “great” but maybe they do. He’s the closest match I can think of—but maybe not, because he was a writer before going into politics, so he doesn’t really count as a politician emerging as a novelist.

P.S. Bissell’s article also discusses the idea of writers being outsiders, which motivates me to point to these two posts:

There often seems to be an assumption that being in the elite and being an outsider are mutually exclusive qualities, but they’re not.

The insider-outsider perspective

As I wrote in the comments to one of those posts:

Saying someone is an outsider doesn’t convey much information, given that just about anyone can grab that label. However, as an observer of politics (and science), I notice that people sometimes highlight their outsider status, and as a political scientist I find that interesting. For example, what’s interesting about Steven Levitt in Freakonomics is not so much that he thinks of himself as a “rogue” but that he decided to label himself that way. Rather than presenting himself as an informant from the inside, he presented himself as an outsider. He had the choice of taking either tack, and he decided on the outsider label. That’s interesting.

Why would people want economics advice from a “rogue” outsider who thinks that drunk walking is more dangerous than drunk driving, thinks we are assured of 30 years of global cooling, and believes that beautiful parents are 36% more likely to have girls? Wouldn’t you prefer economics advice from an insider, someone with a Harvard and MIT education who’s now the William B. Ogden Distinguished Service Professor of Economics at the University of Chicago? That’s what baffles me.

The outsider-novelist thing is clearer, in that different authors offer different perspectives. We read Jack London for one thing and Jane Austen for another.

Michael Lewis.

I just read this interesting review by Patrick Redford of the new book by journalist Michael Lewis on Sam Bankman-Fried, the notorious crypto fraudster.

We discussed earlier how the news media, including those such as Michael Lewis and Tyler Cowen who play the roles of skeptics in the media ecosystem, were not just reporters of the crypto fraud; they also played an important part in promoting and sustaining the bubble. As I wrote when all this came out, the infrastructure of elite journalism was, I think, crucial to keeping the bubble afloat. Sure, crypto had lots of potential just from rich guys selling to each other and throwing venture capital at it, and suckers watching Alex Jones or whatever investing their life savings, but elite media promotion took it to the next level.

We’ve talked earlier about the Chestertonian principle that extreme skepticism is a form of credulity, an idea that seems particularly relevant to the comedian and political commentator Joe Rogan, whose twin stances of deep skepticism and deep credulity are inextricably intertwined. To be skeptical about the moon landing or the 2020 election requires belief in all sorts of ridiculous theories and discredited evidence. Skepticism and credulity here are not opposites—we’re not talking “horseshoe theory”—rather, they’re the same thing. Skepticism of the accepted official view that the moon landings actually happened, or that the laws of physics are correct and ghosts don’t exist, or that UFOs are not space aliens, or that Joe Biden won the 2020 election by 7 million votes, is intimately tied to active belief in some wacky theory or unsubstantiated or refuted empirical claim.

I’m not saying that skepticism is always a form of credulity, just that sometimes it is. When I was skeptical of the Freakonomics-endorsed claim that beautiful parents are 36% more likely to have girls, no credulity was required, just some background in sex ratios and a basic understanding of statistics. Similarly if you want to be skeptical of the claim that UFOs are space aliens etc. There’s ordinary skepticism and credulous skepticism. Ordinary skepticism, though, is easy to come by. Credulous skepticism, by its nature, is a more unstable quantity and requires continuing effort—you have to carefully protect your skeptical beliefs and keep them away from any stray bits of truth that might contaminate them. Which I guess is one reason that people such as Rogan who have the ability to do this with a straight face are so well compensated.

But what about Michael Lewis? Like everybody else, I’m a fan of Moneyball. I haven’t read any of his other books—I guess a lot of his books are about rich financial guys, and, while I know the topic is important, it’s never interested me so much—but then last year he interviewed me for a podcast! I was kinda scared at first—my previous experiences with journalists reporting on scientific controversies have been mixed, and I didn’t want to be walking into a trap—but it worked out just fine. Lewis was straightforward, with no hidden agenda. Here’s a link to the podcast, and here’s an article with some background. Perhaps I should’ve been more suspicious given that the podcast is produced by a company founded by plagiarism-defender

Of Lewis’s new book, Redford writes:

A common thematic thread, perhaps the common thread, wending throughout Michael Lewis’s bibliography is the limits of conventional wisdom. His oeuvre is stuffed with stories about the moments at which certain bedrock ideas—ones about finance, or baseball, or electoral politics—crumble under their own contradictions. This is helped along, often, by visionary seers—like Michael Burry, Billy Beane, or John McCain—who put themselves in position to take advantage of those who are either too blinkered or too afraid to see the unfamiliar future taking shape in front of them.

That describes Moneyball pretty accurately, and at first it would seem to fit a podcast called “Against the Rules,” but actually that podcast was all about how there was existing expertise, settled enough to be considered “conventional wisdom,” that was swept aside in a wave of confusion. In particular, Lewis talked about the Stanford covid crew, a group of well-connected iconoclasts in the Billy Beane mode, but he showered them with criticism, not praise. Maybe that podcast worked because he was going against type? I don’t know.

Just speaking in general terms, we shouldn’t ignore the visionary seers—Bill James, imperfect though he may have been, was really on to something, and even his missteps are often interesting. But we can’t assume the off-the-beaten-path thinkers are always right: that way lies Freakonomics-style madness, as here and here.

It’s too bad what happened to Lewis with the Bankman-Fried thing, but I wouldn’t attribute it to a general problem with his take on conventional wisdom. It’s more an omnipresent risk of journalism: framing a story around a hero creates problems if the hero isn’t even a plausible anti-hero. (Recall that an “anti-hero” is not the opposite of a hero; rather, he’s someone who doesn’t look or act like a conventional hero but still is a hero in some sense.)

Scientific publishers busily thwarting science (again)

This post is by Lizzie.

I am working with some colleagues on how statistical methods may affect citation counts. For this, we needed to find some published papers. So one colleague started downloading some. And their university quickly showed up with the following:

Yesterday we received three separate systematic downloading warnings from publishers Taylor & Francis, Wiley and UChicago associating the activity with [… your] office desktop computer’s address. As allowed in our licenses with those publishers, they have already blocked access from that IP address and have asked us to investigate.

Unfortunately, all of those publishers specifically prohibit systematic downloading for any purpose, including legitimate bibliometric or citation analysis.

Isn’t that great? I review for all of these companies for free; in rare cases I pay them to publish my papers; and then they use all that money to do this? Oh, and the university library signed a contract, so now they pay someone to send these emails… that’s just great. I know we all know this is a depressing cabal, but this one surprised me.

In other news, this photo is from my (other) colleague’s office, where I am visiting for a couple days.

A gathering of the literary critics: Louis Menand and Thomas Mallon, meet Jeet Heer

Marshall McLuhan: The environment is not visible. It’s information. It’s electronic.

Norman Mailer: Well, nonetheless, nature still exhibits manifestations which defy all methods of collecting information and data. For example, an earthquake may occur, or a tidal wave may come in, or a hurricane may strike. And the information will lag critically behind our ability to control it.

Regular readers will know that I’m a big fan of literary criticism.  See, for example,

“End of novel. Beginning of job.”: That point at which you make the decision to stop thinking and start finishing

Contingency and alternative history (followup here)

Kazin to Birstein to a more general question of how we evaluate people’s character based on traits that might, at least at first glance, appear to be independent of character (followup here)

“Readability” as freedom from the actual sensation of reading

Things that I like that almost nobody else is interested in

Anthony West’s literary essays

I recently came across a book called “Sweet Lechery: Reviews, Essays and Profiles,” by literary journalist Jeet Heer. The “Lechery” in the title is a bit misleading, but, yes, Heer is open about sexual politics. In any case, like the best literary critics, he engages with the literary works and the authors in the context of politics and society. He has some of the overconfidence of youth—the book came out ten years ago, and some of its essays are from ten or more years before that—and there’s a bunch of obscure Canadian stuff that doesn’t interest me, but overall I found the writing fun and the topics interesting.

One good thing about the book was its breadth of cultural concerns, including genre and non-genre literature, political writing, and comic books, with the latter taken as of interest in themselves, not merely as some sort of cultural symbol.

I also appreciated that he didn’t talk about movies or pop music. I love movies and pop music, but they’re also such quintessential topics for Boomer critics who want to show their common touch. There are enough other places where I can read that Stevie Wonder and Brian Wilson are geniuses, that Alex Chilton is over- or underrated, appreciations of obscure records and gritty films from the 1970s, etc.

My comparison point here is Louis Menand’s book on U.S. cold war culture from 1945-1965, which made me wonder how he decided what to leave in and what to leave out. I’m a big fan of Menand (as far as I’m concerned, he can write about whatever he wants to write about); it was just interesting to consider all the major cultural figures he left out, even while considering the range of characters he included in that book. Heer writes about Philip Roth but also about John Maynard Keynes; he’s not ashamed to write about, and take seriously, high-middlebrow authors such as John Updike and Alice Munro, while also finding time to write thoughtfully about Robert Heinlein and Philip K. Dick. I was less thrilled with his writing about comics, not because of anything he said that struck me as wrong, exactly, but rather because he edged into a boosterish tone, promotion as much as criticism.

Another comparison from the New Yorker stable of writers is Thomas Mallon, who notoriously wrote this:

[screenshot of Mallon’s remark]

Thus displaying his [Mallon’s] ignorance of Barry Malzberg, who has similarities with Mailer both in style and subject matter. I guess that Malzberg was influenced by Mailer.

And, speaking of Mailer, who wrote some good things but I think was way way overrated by literary critics during his lifetime—I’m not talking about sexism here, I just think there were lots of other writers of his time who had just as much to say and could say it better, with more lively characters, better stories, more memorable turns of phrase, etc.—anyway, even though I’m not the world’s biggest Mailer fan, I did appreciate the following anecdote which appeared, appropriately enough, in an essay by Heer about Canadian icon Marshall McLuhan:

Connoisseurs of Canadian television should track down a 1968 episode of a CBC program called The Summer Way, a highbrow cultural and political show that once featured a half-hour debate about technology between McLuhan and the novelist Norman Mailer. . . .

McLuhan: We live in a time when we have put a man-made satellite environment around the planet. The planet is no longer nature. It’s no longer the external world. It’s now the content of an artwork. Nature has ceased to exist.

Mailer: Well, I think you’re anticipating a century, perhaps.

McLuhan: But when you put a man-made environment around the planet, you have in a sense abolished nature. Nature from now on has to be programmed.

Mailer: Marshall, I think you’re begging a few tremendously serious questions. One of them is that we have not yet put a man-made environment around this planet, totally. We have not abolished nature yet. We may be in the process of abolishing nature forever.

McLuhan: The environment is not visible. It’s information. It’s electronic.

Mailer: Well, nonetheless, nature still exhibits manifestations which defy all methods of collecting information and data. For example, an earthquake may occur, or a tidal wave may come in, or a hurricane may strike. And the information will lag critically behind our ability to control it.

McLuhan: The experience of that event, that disaster, is felt everywhere at once, under a single dateline.

Mailer: But that’s not the same thing as controlling nature, dominating nature, or superseding nature. It’s far from that. Nature still does exist as a protagonist on this planet.

McLuhan: Oh, yes, but it’s like our Victorian mechanical environment. It’s a rear-view mirror image. Every age creates as a utopian image a nostalgic rear-view mirror image of itself, which puts it thoroughly out of touch with the present. The present is the enemy.

That’s great! I love how McLuhan keeps saying these extreme but reasonable-sounding things and then, each time, Mailer brings him down to Earth. Norman Mailer, who built much of a career on bloviating philosophizing, is the voice of reason here. The snippet that I put at the top of this post is my favorite: McLuhan as glib Bitcoin bro, Mailer as the grizzled dad who has to pay the bills and fix the roof after the next climate-induced hurricane.

Heer gets it too, writing:

It’s a measure of McLuhan’s ability to recalibrate the intellectual universe that in this debate, Mailer—a Charlie Sheen–style roughneck with a history of substance abuse, domestic violence, and public mental breakdowns—comes across as the voice of sobriety and sweet reason.

Also, Heer’s a fan of Uncle Woody!

Those annoying people-are-stupid narratives in journalism

Palko writes:

Journalists love people-are-stupid narratives, but, while I believe cognitive dissonance is real, I think the lesson here is not “To an enthusiastically trusting public, his failure only made his gifts seem more real” and is instead that we should all be more skeptical of simplistic and overused pop psychology.

It’s easier for me to just give the link above than to explain all the background. The story is interesting on its own, but here I just wanted to highlight this point that Palko makes. Yes, people can be stupid, but it’s frustrating to see journalists take a story of a lawsuit-slinging celebrity and try to twist it into a conventional pop-psychology narrative.

Torment executioners in Reno, Nevada, keep tormenting us with their publications.

The above figures come from this article, which is listed on this Orcid page (with further background here):

Horrifying as all this is, at least from the standpoint of students and faculty at the University of Nevada, not to mention the taxpayers of that state, I actually want to look into a different bizarre corner of the story.

Let me point you to a quote from a recent article in Retraction Watch:

The current editor-in-chief [of the journal that featured the above two images, along with lots, lots more] . . . published a statement about the criticism on the journal’s website, where he took full responsibility for the journal’s shortcomings. “While you can argue on the merits, quality, or impact of the work it is all original and we vehemently disagree with anyone who says otherwise,” he wrote.

I don’t think that claim is true. In particular, I don’t think it’s correct to state, vehemently or otherwise, that the work published in that journal is “all original.” I say this on the evidence of this paragraph from the article that appeared there, an article we associate with the phrase “torment executioners”:

It appears that the original source of this material was an article that had appeared the year before in an obscure and perhaps iffy outlet called The Professional Medical Journal. From the abstract of the paper in that journal:

The scary thing is that if you google the current editor of the journal where the apparent bit of incompetent plagiarism was published, you’ll see that this is his first listed publication:

Just in case you were wondering: no, “Cambridge Scholars Publishing” is not the same as Cambridge University Press.

Kinda creepy that someone who “vehemently” makes a false statement about plagiarism published in his own journal has published a book on “Guidelines for academic researchers.”

We seem to have entered a funhouse-mirror version of academia, with entire journals and subfields of fake articles, advisers training new students to enter fake academic careers, and, in a Gresham’s law sort of way, all of this crowding out legitimate teaching and research.

Not written by a chatbot

The published article from the above-discussed journal that got this whole “torment executioners” business started was called “Using Science to Minimize Sleep Deprivation that May Reduce Train Accidents.” It’s two paragraphs long, includes a mislabeled figure that was a stock image of a fly, and has no content.

I showed that article to a colleague, who asked whether it was written by ChatGPT. I said, no, I didn’t think so, because it was too badly written to be by a chatbot. I was not joking! Chatbot text is coherent at some level, often following something like the format of the standard five-paragraph high school essay, while this article did not make any sense at all. I think it’s more likely that it was a really bad student paper, maybe something written in desperation in the dwindling hours before the assignment was due, and then they published it in this fake journal. On the other hand, it was published in 2022, and chatbots were not so good back in 2022, so maybe it really is the product of an incompetent chatbot. Or maybe it was put together from plagiarized material, as in the “torment executioners” paper, and we just don’t have the original source to demonstrate it. My guess remains that it was a human-constructed bit of nonsense, but I’m guessing that anyone who would do this sort of thing today would use a chatbot. So in that sense these articles are a precious artifact of the past.

Back to the torment executioners

That apparently plagiarized article was still bugging me. One weird part of the story is that even the originally-published study seems a bit off, with statements such as “42% dentist preferred both standing and sitting position.” Maybe the authors of the “torment executioners” paper purposely picked something from a very obscure source, under the belief that then nobody would catch the copying?

What the authors of the “torment executioners” paper seem to have done is to take material from the paper that had been published earlier in a different journal and run it through a computer program that changed some of the words, perhaps to make it less easily caught by plagiarism detectors? Here’s the map of transformations:

"acquired" -> "procured"
"vision" -> "perception"
"incidence" -> "effect"
"involvement" -> "association"
"followed" -> "taken after"
"Majority of them" -> "The larger part of the dental practitioner"
"intensity of pain" -> "concentration of torment"

Ha! Now we’re getting somewhere. “Concentration of torment,” indeed.

OK, let’s continue:

"discomfort" -> "inconvenience"
"aching" -> "hurting"
"paracetamol" -> "drugs"
"pain killer" -> "torment executioners"

Bingo! We found it. It’s interesting that this last word was made plural in translation. This suggests that the computer program that did these word swaps also had some sort of grammar and usage checker, so as a side benefit it fixed a few errors in the writing of the original article. The result is to take an already difficult-to-read passage and make it nearly incomprehensible.

But we’re not yet done with this paragraph. We also see:

"agreed to the fact" -> "concurred to the truth"

This is a funny one, because “concurred” is a reasonable synonym for “agreed,” and “truth” is not a bad replacement for “fact,” but when you put it together you get “concurred to the truth,” which doesn’t work here at all.

And more:

"pain" -> "torment level"
"aggravates" -> "bothers"
"repetitive movements" -> "tedious developments"

Whoa! That makes no sense at all. A modern chatbot would do it much better, I guess.

Here are a few more fun ones, still from this same paragraph of Ferguson et al. (2019):

"Conclusions:" -> "To conclude"
"The present study" -> "the display consideration"

“Display consideration”? Huh?

"high prevalence" -> "tall predominance"

This reminded me of Lucius Shepard’s classic story, “Barnacle Bill the Spacer,” which featured a gang called the Strange Magnificence. Maybe the computer program was having some fun here!

"disorders" -> "disarrangement"
"dentist" -> "dental specialists"
"so there should be" -> "in this manner"
"preventing" -> "avoiding"
"delivered" -> "conveyed"
"during" -> "amid"
"undergraduate curriculum" -> "undergrad educational programs"
"should be programmed" -> "ought to be put up"
"explain" -> "clarify"
"prolonged" -> "drawn out"

Finally, “bed posture density” becomes “bed pose density.” I don’t know about this whole “bed posture” thing . . . maybe someone could call up the Dean of Engineering at the University of Nevada and find out what’s up with that.
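The sort of crude, longest-phrase-first word swapping described above is easy to mimic in a few lines of Python. This is a hypothetical reconstruction of the kind of tool that might have been used, not the actual program; the substitution table is drawn from the swaps documented above:

```python
import re

# A few of the word swaps observed in the "torment executioners" paper
# (source phrase -> disguised phrase).
SWAPS = {
    "pain killer": "torment executioners",
    "intensity of pain": "concentration of torment",
    "high prevalence": "tall predominance",
    "repetitive movements": "tedious developments",
    "agreed to the fact": "concurred to the truth",
}

def disguise(text):
    """Apply each swap, longest source phrase first, case-insensitively,
    so that multi-word phrases fire before their single-word pieces."""
    for src in sorted(SWAPS, key=len, reverse=True):
        text = re.sub(re.escape(src), SWAPS[src], text, flags=re.IGNORECASE)
    return text

print(disguise("The intensity of pain was reduced by a pain killer."))
# -> "The concentration of torment was reduced by a torment executioners."
```

Note that a naive swapper like this happily produces the same grammatical wreckage seen in the published paper (“a torment executioners”), which is part of what makes the plagiarism so easy to spot once you guess the trick.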

The whole article is hilarious, not just that paragraph. It’s a fun game, to try to figure out the original source of phrases such as, “indigent body movements” (indigent = poor) and “There are some signs when it comes to musculoskeletal as well” (I confess to being baffled by this one), and, my personal favorite, “Several studies have shown that overweight children are an actual thing.”

Whaddya say, president and provost of the University of Nevada, Reno? Are you happy that your dean of engineering is running a journal that publishes a paper like that? “Overweight children are an actual thing.”

Oh, it’s ok, that paper was never read from beginning to end by anybody—authors included.

Actually, this sentence might be my absolute favorite:

Having consolation in their shoes, having vigor in their shoes, and having quality in their shoes come to play within the behavioral design of youthful and talented kids with respect to the footwear they select to wear.

“Having vigor in their shoes” . . . that’s what it’s all about!

There’s “confidential dental clinics”: I guess “confidential” is being used as a “synonym” for “private.” And this:

Dental practitioners and other wellbeing callings in fact cannot dodge inactive stances for an awfully long time.

Exactly what you’d expect to see in a legitimate journal like the International Supply Chain Technology Journal.

I think the authors of this article are well qualified to teach in the USC medical school. They just need to work in some crazy giraffe facts and they’ll be just fine.

With the existence of chatbots, there will never be a need for this sort of ham-fisted plagiarism. End of an era. Kinda makes me sad.

P.S. As always, we laugh only to avoid crying. I remain furious on behalf of the hardworking students and faculty at UNR, not to mention the taxpayers of the state of Nevada, who are paying for this sort of thing. The phrase “torment executioners” has entered the lexicon.

P.P.S. Regarding the figures at the top of the post: I’ve coauthored papers with students. That’s fine; it’s a way that students can learn. I’m not at all trying to mock the students who made those pictures, if indeed that’s who drew them. I am criticizing whoever thought it was a good idea to publish this, not to mention to include it on professional C.V.’s. As a teacher, when you work with students, you try to help them do their best; you don’t stick your name on their crude drawings, plagiarized work, etc., which can’t be anyone’s best. I feel bad for any students who got sucked into this endeavor and were told that this sort of thing is acceptable work.

P.P.P.S. It looks like there may be yet more plagiarism going on; see here.

P.P.P.P.S. Retraction Watch found more plagiarism, this time on a report for the National Science Foundation.

I’ve been mistaken for a chatbot

… Or not, according to what language is allowed.

At the start of the year I mentioned that I am on a bad roll with AI just now; that roll began in late November when I received reviews back on a paper. One reviewer sent in a 150-word review saying the paper was written by chatGPT. The editor echoed, “One reviewer asserted that the work was created with ChatGPT. I don’t know if this is the case, but I did find the writing style unusual ….” What exactly was unusual was not explained.

That was November 20th. By November 22nd my computer shows a file named ‘tryingtoproveIamnotchatbot,’ which is just a txt file where I pasted in the GitHub commits showing progress on the paper. I figured maybe this would prove to the editors that I did not submit any work by chatGPT.

I didn’t send it. There are many reasons for this. One is that I don’t think I should have to. Further, I suspect chatGPT is not so good at this (rather specific) subject, and between me and my author team, I actually thought we were pretty good at this subject. And I had met with each of the authors to build the paper, its thesis, data, and figures. We had a cool new meta-analysis of rootstock x scion experiments and a number of interesting points. Some of the points I might even call exciting, though I am biased. But, no matter, the paper was the product of lots of work, and I was initially embarrassed, then gutted, about the reviews.

Once I was less embarrassed I started talking timidly about it. I called Andrew. I told folks in my lab. I got some fun replies. Undergrads in my lab (and others later) thought the review itself may have been written by chatGPT. Someone suggested I rewrite the paper with chatGPT and resubmit. Another that I just write back one line: I’m Bing.

What I took away from this was myriad, but I came up with a couple of next steps. One was that this was not a great peer review process and that I should reach out to the editor (and, as one co-author suggested, cc the editorial board). The other was to not be so mortified that I wouldn’t talk about it.

From these steps I took away two things:

1) chatGPT could now control my language.

I connected with a senior editor on the journal. No one is in a good position here, and the editor and reviewers are volunteering their time in a rapidly changing situation. I feel for them and for me and my co-authors. The editor and I tried to bridge our perspectives. It seems he could not have imagined that I or my co-authors would be so offended. And I could not have imagined that the journal already had a policy of allowing manuscripts to use chatGPT, as long as it was clearly stated.

I was also given some language changes to consider, so I might sound less like chatGPT to reviewers. These included some phrases I wrote in the manuscript (e.g., “the tyranny of terroir”). Huh. So where does that end? Say I start writing so that I sound less “like chatGPT” to the editor and others (and I never figured out what that means); then chatGPT digests that, and then what? I adapt again? Do I eventually come back around to those phrases once they have rinsed out of the large language model?

2) Editors are shaping the language around chatGPT.

Motivated by a co-author’s suggestion, I wrote a short reflection which recently came out in a careers column. I much appreciate the journal recognizing this as an important topic and that they have editorial guidelines to follow for clear and consistent writing. But I was surprised by the concerns from the subeditors on my language. (I had no idea my language was such a problem!)

The problem was that I wrote: I’ve been mistaken for a chatbot (and similar language). The argument was that I had not been mistaken — my writing had been. The debate that ensued was fascinating. If I had been in a chatroom and this happened, then I could write “I’ve been mistaken for a chatbot,” but since my co-authors and I wrote this up and submitted it to a journal, it was not part of our identities. So I was over-reaching in my complaint. I started to wonder: if I could not say “I was mistaken for an AI bot” — why does the chatbot get “to write”? I went down an existential hole, from which I have not fully recovered.

And since then I am still mostly existing there. On the upbeat side, writing the reflection was cathartic, and the back and forth with the editors — who I know are just trying to do their jobs too — gave me more perspectives and thoughts, however muddled. And my partner recently said to me, “perhaps one day it will be seen as a compliment to be mistaken for a chatbot, just not today!”

Also, since I don’t know of an archive that takes such things, I will paste the original unedited version below.

I have just been accused of scientific fraud. It’s not data fraud (which, I guess, is a relief because my lab works hard at data transparency, data sharing and reproducibility). What I have just been accused of is writing fraud. This hurts, because—like many people—I find writing a paper a somewhat painful process.

Like some people, I comfort myself by reading books on how to write—both to be comforted by how much the authors of such books stress that writing is generally slow and difficult, and to find ways to improve my writing. My current writing strategy involves willing myself to write, multiple outlines, then a first draft, followed by much revising. I try to force this approach on my students, even though I know it is not easy, because I think it’s important we try to communicate well.

Imagine my surprise, then, when I received reviews back that declared a recently submitted paper of mine a chatGPT creation. One reviewer wrote that it was “obviously Chat GPT” and the handling editor vaguely agreed, saying that they found “the writing style unusual.” Surprise was just one emotion I had; so were shock, dismay, and a flood of confusion and alarm. Given how much work goes into writing a paper, it was quite a hit to be accused of being a chatbot—especially in short order without any evidence, and given the efforts that accompany the writing of almost all my manuscripts.

I hadn’t written a word of the manuscript with chatGPT, and I rapidly tried to think through how to prove my case. I could show my commits on GitHub (with commit messages including “finally writing!” and “Another 25 mins of writing progress!” that I never thought I would share), I could try to figure out how to compare the writing style of my pre-chatGPT papers on this topic to the current submission, maybe I could ask chatGPT if it thought it wrote the paper…. But then I realized I would be spending my time trying to prove I am not a chatbot, which seemed a bad outcome to the whole situation. Eventually, like all mature adults, I decided what I most wanted to do was pick up my ball (manuscript) and march off the playground in a small fury. How dare they?

Before I did this, I decided to get some perspectives from others—researchers who work on data fraud, co-authors on the paper, and colleagues—and I found most agreed with my alarm. One put it most succinctly to me: “All scientific criticism is admissible, but this is a different matter.”

I realized these reviews captured both something inherently broken about the peer review process and—more importantly to me—about how AI could corrupt science without even trying. We’re paranoid about AI taking over us weak humans and we’re trying to put in structures so it doesn’t. But we’re also trying to develop AI so it helps where it should, and maybe that will be writing parts of papers. Here, chatGPT was not part of my work and yet it had prejudiced the whole process simply by its existential presence in the world. I was at once annoyed at being mistaken for a chatbot and horrified that reviewers and editors were not more outraged at the idea that someone had submitted AI generated text.

So much of science is built on trust and faith in the scientific ethics and integrity of our colleagues. We mostly trust others did not fabricate their data, and I trust people do not (yet) write their papers or grants using large language models without telling me. I wouldn’t accuse someone of data fraud or p-hacking without some evidence, but a reviewer felt it was easy enough to accuse me of writing fraud. Indeed, the reviewer wrote, “It is obviously [a] Chat GPT creation, there is nothing wrong using help ….” So it seems, perhaps, that they did not see this as a harsh accusation, and the editor thought nothing of passing it along and echoing it, but they had effectively accused me of lying and fraud in deliberately presenting AI generated text as my own. They also felt confident that they could discern my writing from AI—but they couldn’t.

We need to be able to call out fraud and misconduct in science. Currently, the costs to the people who call out data fraud seem too high to me, and the consequences for being caught too low (people should lose tenure for egregious data fraud in my book). But I am worried about a world in which a reviewer can casually declare my work AI-generated, and the editor and journal simply shuffle the review along and invite a resubmission if I so choose. It suggests not only a world in which the reviewers and editors have no faith in the scientific integrity of submitting authors—me—but also an acceptance of a world where ethics are negotiable. Such a world seems easy for chatGPT to corrupt without even trying—unless we raise our standards.

Side note: Don’t forget to submit your entry to the International Cherry Blossom Prediction Competition!

“Participants reported being hungrier when they walked into the café (mean = 7.38, SD = 2.20) than when they walked out [mean = 1.53, SD = 2.70, F(1, 75) = 107.68, P < 0.001]."

Jonathan Falk came across this article and writes:

Is there any possible weaker conclusion than “providing caloric information may help some adults with food decisions”?

Is there any possible dataset which would contradict that conclusion?

On one hand, gotta give the authors credit for not hyping or overclaiming. On the other hand, yeah, the statement, “providing caloric information may help some adults with food decisions,” is so weak as to be essentially empty. I wonder whether part of the problem here is the convention that the abstract is supposed to conclude with some general statement, something more than just, “That’s what we found in our data.”

Still and all, this doesn’t reach the level of the classic “Participants reported being hungrier when they walked into the café (mean = 7.38, SD = 2.20) than when they walked out [mean = 1.53, SD = 2.70, F(1, 75) = 107.68, P < 0.001]."

Opposition

Following the recommendation of Elin in comments, I checked out the podcast, If Books Could Kill. It seemed like the kinda thing I might like: two guys going back and forth taking apart Gladwell, Freakonomics, David Brooks, Nudge, and other subjects of this blog. It would be really hilarious if they were to take on the copy-paste jobs of chess sages Ray Keene and Chrissy Hesse, but I guess that would be too niche for them . . .

I have mixed feelings about If Books Could Kill. My take is mostly positive—I listen to podcasts while biking, so what I’m looking for is a kind of background music, a continuing stream of interesting things that flows smoothly. I’m impressed that they can talk so engagingly, unrehearsed (I assume) for an hour straight. No ums or uhs, they stay on topic . . . they’re pros. Not every podcast I’ve listened to goes so well. Sometimes they go too slow and I get bored.

There’s one thing that they’re missing, though, and that’s opposition. The most recent episode I’ve been listening to provides a good example. They discussed Men are from Mars, Women are from Venus, a book from a few years back that of course I’d heard of, but I’d never read, actually never even opened. I’m not saying that out of pride, it’s just not something that ever crossed my desk.

The podcast went as usual: they lead off with a summary of the book’s theme, then talk about how the book has some reasonable ideas and they want to give it a fair shake, then they get into the problematic passages and get into some questionable aspects of the author’s career.

This all works, but then at some point I kind of resist. It’s not that I think they’re being unfair to the book, exactly. It’s more like . . . they need some opposition. It goes like this: the book says X, they point out problems with X and get to mocking, which is fine—but then I want to push back. I don’t buy everything they’re saying. I think the podcast would be better if they could add a third person, someone who’s also good at the podcast thing and who generally agrees with them, but can push back against their stronger statements. I’m not asking for a debate every week, just someone who can, every once in a while, say, “Whoa, you’re going too far this time. Yes, you have good points, but the book you’re discussing is not so horrible as you say, at least not right here.”

And yes, the hosts of the podcast—Michael Hobbes and Peter Shamshiri—do point out positive features of the books they’re criticizing; they really do try to do their best to give the authors a fair shake. It’s just that in every episode they’ll get into this rhythm of reinforcing each other to the extent that they’ll miss the point. They need someone to keep them honest, keep them closer to their best selves.

The comment thread does some of that job on this blog.

And, as we’ve discussed over the years, I do find lots of positive things in Gladwell, Freakonomics, David Brooks, Nudge, etc. I think I’d appreciate Men are from Mars etc., but I guess at this point I won’t ever get around to reading it. As to the podcast: I think I’ll continue listening to it, because it’s entertaining. But it could be better.

P.S. More here on the value of opposition.

P.P.S. It appears that Hobbes has a track record of making hasty judgments and not considering alternative perspectives. That’s too bad cos otherwise I really enjoy this podcast.

Here’s how to subscribe to our new weekly newsletter:

Just a reminder: we have a new weekly newsletter. We posted about it a couple of weeks ago; I’m just giving a reminder here because the goal of the newsletter is to reach people who wouldn’t otherwise go online to read the blog.

Subscribing is free, and then in your inbox each Monday morning you’ll get a list of our scheduled posts for the forthcoming week, along with links to the past week’s posts. Enjoy.

P.S. To subscribe, click on the link and follow the instructions from there.

Storytelling and Scientific Understanding (my talks with Thomas Basbøll at Johns Hopkins on 26 Apr)

Storytelling and Scientific Understanding

Andrew Gelman and Thomas Basbøll

Storytelling is central to science, not just as a tool for broadcasting scientific findings to the outside world, but also as a way that we as scientists understand and evaluate theories. We argue that, for this purpose, a story should be anomalous and immutable; that is, it should be surprising, representing some aspect of reality that is not well explained by existing models of the world, and have details that stand up to scrutiny.

We consider how this idea illuminates some famous stories in social science involving soldiers in the Alps, Chinese boatmen, and trench warfare, and we show how it helps answer literary puzzles such as why Dickens had all those coincidences, why authors are often so surprised by what their characters come up with, and why the best alternative history stories have the feature that, in these stories, our “real world” ends up as the deeper truth. We also discuss connections to chatbots and human reasoning, stylized facts and puzzles in science, and the millionth digit of pi.

At the center of our framework is a paradox: learning from anomalies seems to contradict usual principles of science and statistics, where we seek representative or unbiased samples. We resolve this paradox by placing learning-within-stories into a hypothetico-deductive (Popperian) framework, in which storytelling is a form of exploration of the implications of a hypothesis. This has direct implications for our work as a statistician and a writing coach.

This post is not really about Aristotle.

I just read this magazine article by Nikhil Krishnan on the philosophy of Aristotle. As a former physics student, I’ve never had anything but disdain for that ancient philosopher, who’s famous for getting just about everything in physics wrong, as well as notoriously claiming that men had more teeth than women (or maybe it was the other way around). Dude just liked to make confident pronouncements. He also thought slavery was cool.

I guess that if Aristotle were around today, he’d be a long-form blogger.

That all said, Krishnan’s article was absolutely wonderful, and I can’t wait to read his forthcoming book. And I say this even though this article didn’t convince me one bit that Aristotle is worth reading! It did convince me that Krishnan is worth reading, which I guess is more relevant right now.

I’m not gonna go through and summarize the article here—it’s short, and you can follow the above link and read it online. Instead, I’ll just point out some thoughts that it inspired in me. It’s the sign of a good work of literature that it makes us reflect.

1. Krishnan discusses AITA. One thing I’d never thought about before is the framing in terms of the asshole, the idea that in any dispute there is exactly one asshole. No room for honest misunderstandings or, from the other direction, a battle between two assholes. This kind of implies that the “asshole” trait is a property of the interaction, so that an unresolvable or unresolved conflict even between two wonderful caring people can inevitably lead to one of them becoming “the asshole.”

2. Krishnan writes:

[Aristotle] says, for instance, that people in politics who identify flourishing with honor can’t be right, for honor “seems to depend more on those who honor than on the one honored.” This has been dubbed the “Coriolanus paradox”: seekers of honor “tend to defeat themselves by making themselves dependent on those to whom they aim to be superior,” as Bernard Williams notes.

I’ve never heard of Bernard Williams, but I have heard of Coriolanus, and what really struck me about that bit is how much it reminded me of someone I know who absolutely loves honors—he lusts after them, I’d say, to the extent that this lust has warped his life. Nothing wrong with honors as long as they don’t do that distortion. But that point about honor-seekers defeating themselves by making themselves dependent on those to whom they aim to be superior . . . That hits the nail on the head for this guy, his frustration at not being recognized by his perceived inferiors. I’d never thought about that angle before. (Please don’t try to guess this person’s name in the comments; that’s not the point here.)

3. Krishnan writes that his college instructor gave this recommendation: “How to read Aristotle? Slowly.” This reminds me of the principle that God is in every leaf of every tree. In short: Sure, read Aristotle very slowly and you’ll learn a lot, in the same way that if you put any bug under a microscope and look at it carefully, or if you sit in front of a painting for a few hours and really force yourself to stare, or if you hold a long conversation with anyone, you’ll learn a lot.

4. Krishnan writes:

There is such a thing as the difference between right and wrong. But reliably telling them apart takes experience, the company of wise friends, and the good luck of having been well brought up.

Well put. It also helps not to be hungry or in pain.

Our new Substack newsletter: The Future of Statistical Modeling!

Some people told me that it would be easier to follow the blog if it were available in newsletter form. So we set up a Substack newsletter called The Future of Statistical Modeling.

Each week, the newsletter will give a list of the posts scheduled for the upcoming week at our blog, along with links to the previous week’s posts.

To make the newsletter more appealing, I’ll also sometimes post other stuff there, such as entire future posts, so then if you subscribe to the newsletter you get access to some fun stuff that you otherwise might not see for months.

The Substack newsletter is free, just a convenience for you, and a way for us to broaden our reach.

God is in every leaf of every tree—comic book movies edition.

Mark Evanier writes:

Martin Scorsese has directed some of the best movies ever made and most of them convey some powerful message with skill and depth. So it’s odd that when he complains about “comic book movies” and says they’re a danger to the whole concept of cinema, I have no idea what the f-word he’s saying. . . .

Mr. Scorsese is acting like “comic book movies” are some new thing. Just to take a some-time-ago decade at random, the highest grossing movie of 1980 was Star Wars: Episode V — The Empire Strikes Back. The highest-grossing movie of 1981 was Superman II. The highest of 1982 was E.T. the Extra-Terrestrial and the highest-grossing movies of the following years were Star Wars: Episode VI — Return of the Jedi, Ghostbusters, Back to the Future, Top Gun, Beverly Hills Cop II, Who Framed Roger Rabbit and Batman.

I dunno about you but I’d call most of those “comic book movies.” And now here we have Scorsese saying of the current flock, “The danger there is what it’s doing to our culture…because there are going to be generations now that think movies are only those — that’s what movies are.” . . .

This seems like a statistical problem, and I imagine some people have studied this more carefully. Evanier seems to be arguing that comic book movies are no bigger of a thing now than they were forty years ago. There must be some systematic analysis of movie genres over time that could address this question.
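Evanier’s own list gives enough raw material for a toy version of that analysis. Here’s a minimal Python sketch that computes the share of “comic book movies” among a decade’s top grossers; the True/False genre flags are my own rough guesses at Evanier’s labeling (“most of those”), not anyone’s official coding, and a real study would need complete box-office data and a consistent genre taxonomy:

```python
from collections import defaultdict

# Hypothetical toy data: the 1980s top-grossing films Evanier lists above.
# The "comic book movie" flags are my own illustrative guesses.
top_films = [
    (1980, "The Empire Strikes Back", True),
    (1981, "Superman II", True),
    (1982, "E.T. the Extra-Terrestrial", True),
    (1983, "Return of the Jedi", True),
    (1984, "Ghostbusters", True),
    (1985, "Back to the Future", True),
    (1986, "Top Gun", False),
    (1987, "Beverly Hills Cop II", False),
    (1988, "Who Framed Roger Rabbit", True),
    (1989, "Batman", True),
]

def genre_share_by_decade(films):
    """Fraction of flagged ("comic book") films per decade."""
    counts = defaultdict(lambda: [0, 0])  # decade -> [flagged, total]
    for year, _title, flagged in films:
        decade = (year // 10) * 10
        counts[decade][1] += 1
        if flagged:
            counts[decade][0] += 1
    return {d: flagged / total for d, (flagged, total) in sorted(counts.items())}

print(genre_share_by_decade(top_films))  # 8 of 10 flagged -> {1980: 0.8}
```

Running the same computation on each decade’s top-ten lists would directly test Evanier’s claim that the genre’s share hasn’t grown, though the hard part is the genre coding, not the arithmetic.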

Progress in 2023

Published:

Unpublished:

Enjoy.