Update on effects of economy on political attitudes and behavior?

The other day we discussed some unresolved issues about how the state of the economy can affect voters’ attitudes. Here’s part of the story, but only part of it:

Individuals can feel one way even as averages are going in a different direction . . . In a recession, you see comments about people losing everything; here, the comments are more along the lines of, “It’s supposed to be an economic boom, but we’re still just getting by.” But, sure, if there’s an average growth of 2%, say, then (a) 2% isn’t that much, especially if you have a child or you just bought a new house or car, and (b) not everybody’s gonna be at that +2%: this is the average, and roughly half the people are doing worse than that. The point is that most people are under economic constraints, and there are all sorts of things that will make people feel these constraints–including things like spending more money, which from an economic consumption standpoint is a plus, but also means you have less cash on hand. . . .

People have this idea that a positive economy means their economic constraints will go away–but even if there really is 2% growth, that’s still only 2%, and you can easily see that 2% disappear cos you spent it on something. From the economist’s perspective, if you just spent $2000 on something nice, that’s an economic plus for you, but from the consumer’s point of view, spending that $2000 took away their financial cushion. The whole thing is confusing . . .

My point is not that people “should” be feeling one way or another, just that the link between economic conditions and economic perception at the individual level is not at all as direct as one might imagine based on general discussions of the effects of the economy and politics.

This makes me think that the view of the economy from the news media is important, as the media report hard numbers which can be compared from election to election.

I summarized:

My current take on the economy-affecting-elections thing is that, in earlier decades, economic statistics reported in the news media served as a baseline or calibration point which individual voters could then adjust based on their personal experiences. Without the calibration, the connection between the economy and political behavior is unmoored.

Another way to put it is that there is little difficulty in seeing the direct effect of economic changes on aggregate political attitudes. In general, if the economy is improving under some reasonable measure, then more people will be doing better and fewer people will be doing worse, and that should show up in average attitudes. It is, in political science jargon, a “valence” issue. The more difficult question is: How do economic differences between election years map to different political attitudes in these elections? As noted above, my best answer is that, in the past, people could use well-publicized economic statistics such as the unemployment and inflation rates to align their expectations across elections. But now the news media landscape is so fractured that this sort of aggregate calibration is no longer possible.

I was discussing this with a couple of political scientists today and they pointed me to the work of Mark Kayser, Michael Peress, and their collaborators. Here’s a link to some of their papers.

And here’s a paper by Bob Erikson from 2004 that I must have read at the time. It begins:

Many of the findings regarding economic voting derive from the micro-level analyses of survey data, in which respondents’ survey evaluations of the economy are shown to predict the vote. This paper investigates the causal nature of this relationship and argues that cross-sectional consistency between economic evaluations and vote choice is mainly if not entirely due to vote choice influencing the survey response. Moreover, the evidence suggests that, apart from this endogenously induced partisan bias, almost all of the cross-sectional variation in survey evaluations of the economy is random noise rather than actual beliefs about economic conditions. In surveys, the mean evaluations reflect the economic signal that predicts the aggregate vote. Following Kramer (1983), economic voting is best studied at the macro-level rather than the micro-level.

Lots to chew on here. Most of the time when we talk about the economy and voting, these issues never even come up. But they should.

“Deciphering the Neighborhood Atlas Area Deprivation Index: The Consequences of Not Standardizing”

Steve Petterson writes:

Given your interest/concerns about the Area Deprivation Index (ADI), I thought you would be interested in my paper that has just been accepted by Health Affairs Scholar. The supplemental materials include much more detail about the problems with the ADI.

The main results have been replicated by a team at Stanford (affiliated with Bob Phillips) and Northwestern (affiliated with Bernard Black). Through Bob, CMS and ASPE are aware of my findings (that build on your findings).

So you know, I was the analyst behind the Social Deprivation Index (SDI) that you used in your maps for your blog.

From Petterson’s new article:

The Area Deprivation Index is a widely used measure recently selected for several Federal payment models that adjust payments based on where beneficiaries live. A recent debate in Health Affairs focuses on seemingly implausible ADI rankings in major cities and across New York. At the root of the issue is the importance of standardization of measures prior to calculating index scores. . . . The main finding is that without standardization, the ADI is reducible to a weighted average of just two measures—income and home values, certainly not the advertised multidimensional measure. . . .

This last bit resonates with me: it’s the familiar situation where an index is created that nominally uses all sorts of information but actually is based on very little. Similar issues arose with the notorious Electoral Integrity Project (where the data on North Korea were based on something like 3 respondents) and that Philadelphia crime analysis (where the trends in that city were compared to a weighted average from only 3 other cities) and various other indexes and measures floating around.

From my perspective, the big issue here with the Area Deprivation Index is not standardization so much as the problem of people uncritically using a “Deprivation Index” without thinking hard about what goes into it.
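
To see the mechanics of the standardization point, here’s a minimal sketch with made-up data (the variable names, scales, and weights are illustrative stand-ins, not the actual ADI inputs): in a weighted sum of measures on their natural scales, the dollar-denominated variables swamp everything else, whereas z-scoring first lets each measure contribute comparably.

```python
import numpy as np

# Illustrative sketch only: made-up neighborhood measures, not the actual
# ADI inputs or weights.
rng = np.random.default_rng(0)
n = 1000
median_home_value = rng.normal(300_000, 120_000, n)  # dollars
median_income     = rng.normal(60_000, 25_000, n)    # dollars
pct_no_hs_diploma = rng.normal(12, 6, n)             # percent
pct_unemployed    = rng.normal(6, 3, n)              # percent

X = np.column_stack([median_home_value, median_income,
                     pct_no_hs_diploma, pct_unemployed])
names = ["home value", "income", "pct no HS diploma", "pct unemployed"]
weights = np.full(4, 0.25)  # nominally equal weights

# Unstandardized index: the dollar-scale variables dominate it.
index_raw = X @ weights
print("correlation of each measure with the unstandardized index:")
for name, col in zip(names, X.T):
    print(f"  {name}: {np.corrcoef(col, index_raw)[0, 1]:.2f}")

# Standardized (z-scored) index: each measure contributes comparably.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
index_std = Z @ weights
print("correlation of each measure with the standardized index:")
for name, col in zip(names, Z.T):
    print(f"  {name}: {np.corrcoef(col, index_std)[0, 1]:.2f}")
```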

Why was “happiness” such a hot social science topic 20 years ago but not so much now?

I happened to come across this post from 2006, Immigration and relative happiness, and it reminded me that “happiness” was really big in social science twenty years ago. We had a lot of posts on the topic–not that coverage on this blog is the same as importance within social science, but I will often end up posting on things that are being talked about. If you search on “happiness” you get 4 posts from 2006, 3 from 2007, 3 from 2010, 1 from 2011, 2 from 2012, 2 from 2014, 2 from 2016, and 3 since then. That reflects a gradual decline of interest in the topic, but it’s really more than that. Back in the 2005 zone, we were thinking about happiness a lot, whereas now it’s just one of many social and behavioral research topics.

That’s an internal-to-science take. We can also give an external, societal take: 20 years ago we were still in the long post-1990s economic boom, and there was a political consensus and a feeling that many of the world’s major problems were on the way to being solved. When I say “political consensus,” I don’t want to overstate the point: in the U.S. there were major political conflicts at the time (Iraq War, Social Security, a continuing increase in partisan polarization); still, it was nothing like what’s going on in the world now, both economically and in international relations. The point is just that, from the perspective of the early 2000s, it seemed that modern society was on the way to solving the big material and social problems, so it made sense to look at happiness, in a sort of quantitative-social-science update of the work of Galbraith and others in the 1950s. Now here we are in the 2020s, and people are running around trying to avoid getting bombed, or arrested, or fired; “happiness” seems like less of an immediate priority.

Maybe there’s a way to combine these perspectives to ask why happiness flared and then faded as a hot research topic. An easy answer would be that the problems have all been solved, but I don’t think so. It’s more like it faded away . . . it was more of a dead end. Or, not a dead end, exactly, more that not all its hopes were satisfied. I guess this arises in social science more generally, that an idea will flare in popularity, there will be lots of work in the area, some progress is made, some open questions remain, and then it becomes part of our background knowledge, just one more set of unresolved questions in our understanding of the world.

Happiness research still gets hyped–no, I don’t believe the claim, attributed to “a landmark paper in 2010,” that “a rise in income increased people’s well-being, but only to a ceiling of $75,000”–but it’s no longer the next new thing.

I’m not trying to dish on happiness studies in particular; it’s just an example of a social-science fad that was in the right place at the right time.

Calling something a “fad” is not to label it as useless–hula hoops are still around and providing people with hours of fun!–it’s just interesting how an idea in social science can come out of nowhere, become huge, and then fade away. Not all the way to zero, and it didn’t start at zero either–the General Social Survey has been asking about happiness since 1972–but a fad nonetheless.

On the other hand, there’s this:

So maybe what’s happening is that happiness research continues to be a big deal, and I’ve just been implicitly noticing the derivative of its popularity rather than the absolute level. When happiness research was on the way up, it caught my attention. Now that it’s a mature subfield, it’s not so exciting. I don’t know. As with many social-science ideas, happiness research burst onto the scene with the promise that it would solve all sorts of problems, then it settled down to just become one more thing.

“Political Prediction and the Wisdom of Crowds”: Evaluating election forecasts by comparing them to betting odds over time

Rajiv Sethi, Julie Seager, et al. write:

We evaluate the relative forecasting performance of three statistical models and a prediction market for several outcomes decided during the November 2024 elections in the United States: the winner of the presidency, the popular vote, fifteen competitive states in the Electoral College, eleven Senate races, and thirteen House races. We argue that conventional measures of predictive accuracy such as the average daily Brier score reward modeling flaws that result in predictable reversals, as long as such movements are in a direction that is aligned with the eventual outcome. Instead, we adopt a test based on the idea that the strength of a model can be measured by the profitability of a trader who believes its forecasts and bets on the market based on this belief. . . . We find that all models failed to beat the market in the headline contract but some did so convincingly in contracts referencing less visible races.

They continue:

The ability of prediction markets to absorb novel sources of information and respond rapidly to unfolding and unprecedented events is a strength relative to statistical models, which are built and calibrated based on an assumption that the past will remain a good guide to the future. But markets also have weaknesses relative to models, being prone to excess volatility and occasionally vulnerable to price manipulation. The question of whether markets or models are more accurate on average is therefore an empirical one, and cannot be answered based on logical reasoning alone. In this paper, we examine this empirical question using data from three statistical models—FiveThirtyEight [Elliott Morris], the Economist [Dan Rosenheck, Ben Goodrich, Geonhee Han, and me], and Silver Bulletin [Nate Silver]—and the Polymarket exchange, which was the only venue on which contracts for a broad range of electoral outcomes were listed for the entire period from early August until election day on November 5.

I’m pretty sure that if the Economist had run with Ben Goodrich’s ideas when putting together their presidential election forecast (see section A.2 of this paper), we would’ve performed better in Sethi et al.’s evaluation.

This is not to say that anyone but Ben deserves credit for that (hypothetically) better performance; we ultimately made the decision to go with the simpler model. My point here is only the familiar one that, those long juicy time series notwithstanding, ultimately this is only a sample of size 1: first because this is all based on a single national election, and second because the outcome of the evaluation can depend so much on a single choice we made during our modeling and implementation process.
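
To make the evaluation criterion from the abstract concrete, here’s a minimal sketch with made-up daily prices and forecasts (not Sethi et al.’s actual data, contracts, or trading rule): it contrasts the average daily Brier score with the profit of a hypothetical trader who trades on the market whenever the model disagrees with the market price.

```python
import numpy as np

# Toy illustration only: made-up daily market prices and model forecasts
# for a single yes/no contract that eventually resolves "yes".
rng = np.random.default_rng(1)
days, outcome = 90, 1
market_price   = np.clip(0.55 + 0.05 * rng.standard_normal(days), 0.01, 0.99)
model_forecast = np.clip(0.60 + 0.10 * rng.standard_normal(days), 0.01, 0.99)

# Average daily Brier score (lower is better).
brier = np.mean((model_forecast - outcome) ** 2)

# Profitability test: each day a trader who believes the model buys one
# "yes" share if the model is above the market price and sells (shorts)
# one share if it is below, then settles at resolution.
position = np.sign(model_forecast - market_price)     # +1 = buy, -1 = sell
profit = np.sum(position * (outcome - market_price))  # payoff minus cost

print(f"average daily Brier score: {brier:.3f}")
print(f"profit of a model-believing trader: {profit:.2f}")
```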

The idea of evaluating a forecast by comparing it to market prices is interesting, and it sends my thoughts in two opposite directions:

1. Given that a market exists, it makes sense to evaluate any outside information (in this case, public forecasts) based on what they add in predictive power to the forecast. Richard Clarida explains this idea in chapter 9 of our book, A Quantitative Tour of the Social Sciences.

2. Conversely, market prices are presumably influenced by public forecasts and, beyond that, new polling information shifts the markets and forecasts together. A few days before the election we discussed an aberrant poll from Iowa, which shifted both betting markets and forecasts.

Putting these perspectives together, it could make sense not just to have markets and forecasts compete but to ask where markets will do better and where forecasts will do better.

In general I’d expect markets to do better with one-of-a-kind information and forecasts to do better with numerical data that is part of an ongoing process.

For example, it was not clear in a forecast how to model information from new voter registrations, data from neighbor polls, or perceptions vs. reality of inflation. But these are factors that markets can incorporate in some ways.

Incorporating that Iowa poll, though, is the sort of thing that a forecast can do very well. Bayesian inference and partial pooling (across states and regions, over time, and among poll organizations) do not come naturally to people, but a model-based forecast can just crunch the numbers and include that new information easily. It won’t be perfect, but accounting for new polls is in the wheelhouse of our election forecasting models. This suggests that if you’re betting, you might want to go with market odds but then use the shift in the public forecasts to get a sense of how much your predictions should change given this new piece of information.
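
As a toy illustration of what “just crunching” a new poll looks like, here’s a minimal normal-normal sketch with made-up numbers (nothing like the Iowa poll’s actual figures or the structure of our real forecasting model, which pools across states, time, and pollsters):

```python
import numpy as np

# Hypothetical numbers for illustration only.
prior_mean, prior_sd = 0.52, 0.03  # model's pre-poll estimate of a candidate's vote share
poll_mean, n = 0.47, 800           # a surprising new state poll
poll_sd = np.sqrt(poll_mean * (1 - poll_mean) / n)

# Precision-weighted (partially pooled) combination of prior and poll.
w_prior, w_poll = 1 / prior_sd**2, 1 / poll_sd**2
post_mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
post_sd = np.sqrt(1 / (w_prior + w_poll))

print(f"updated estimate: {post_mean:.3f} +/- {post_sd:.3f}")
# The aberrant poll pulls the estimate toward 0.47, but only partway,
# in proportion to its precision relative to the model's prior.
```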

On the statement, “American academia is entering a period of even more uncertainty”

The above sentence was written by sociologist Philip Cohen. I guess his statement about American academia is literally true, but the “even more” part seems misleading to me. My take on American academia is that it has been one of the economically cosseted islands of the American economy (along with the health care and police/military/security industries) within the sea of uncertainty, at-will employment, etc.

Academia, health, and security are three areas of the economy that have had atypically low uncertainty over the past few decades: they’ve been close to recession-proof and, yes, there is always belt-tightening but not a lot of people actually getting fired. One exception to this is the replacement of full-time teaching positions with adjuncts, but that seems different from the issues discussed in Cohen’s post.

I guess what I’m saying here is . . . ummm, I’m not saying that everyone in academia has it easy, just because I have it easy, having been lucky enough to step on the escalator at the right time (if maybe not as lucky as the lazybones discussed here). Rather, given all the uncertainty in the economy in the U.S. and the world during the past twenty years, it shouldn’t be hard to believe that this uncertainty would come to academia also. Especially given that there have been direct political efforts to attack academia. Even beyond that, though, it’s hard for any institution to hold out against the tide.

A relevant analogy here might be the police. When people talk about cutting the budget for the police or reducing the autonomy of police officers, police departments fight back, often pretty loudly. The police are like the university in that both (1) provide a necessary function for society, a function for which there is always demand for more, and (2) are highly politicized and active in politics (academia on the left and police on the right). The health-care industry is different. It satisfies property #1 but not property #2: the health-care industry is not associated with either side politically. Although that seems to be changing, with doctors and nurses moving toward the Democrats and the Republican party working pretty hard to antagonize them.

Market and antimarket: The story of the Berkeley Electronic Press

William Davies writes:

The​ words ‘market’ and ‘capitalism’ are frequently used as if they were synonymous. Especially where someone is defending the ‘free market’, it is generally understood that they are also making an argument for ‘capitalism’. Yet the two terms can also denote very different sets of institutions and logics. According to the taxonomy developed by the economic historian Fernand Braudel, they may even be opposed to each other.

In Braudel’s analogy, long phases of economic history are layered one on top of another like the storeys of a house. At the bottom is ‘material life’, an opaque world of basic consumption, production and reproduction. Above this sits ‘economic life’, the world of markets, in which people encounter one another as equals in relations of exchange, but also as potential competitors. Markets are characterised by transparency: prices are public, and all relevant activity is visible to everyone. And because of competition, profits are minimal, little more than a ‘wage’ for the seller. Sitting on top of ‘economic life’ is ‘capitalism’. This, as Braudel sees it, is the zone of the ‘antimarket’: a world of opacity, monopoly, concentration of power and wealth, and the kinds of exceptional profit that can be achieved only by escaping the norms of ‘economic life’. Market traders engage with one another at a designated time and place, abiding by shared rules (think of a town square on market day); capitalists exploit their unrivalled control over time and space in order to impose their rules on everyone else (think of Wall Street). . . . Capitalism, in Braudel’s words, is ‘where the great predators roam and the law of the jungle operates’.

Interesting. Put this way, it all seems obvious, and I guess this must all be well known in economics, but I’ve never thought of it that way.

There can be no sharp distinction between “economic life” and “capitalism” or between “the market” and “the law of the jungle”—even the largest companies have to compete in some way in order to make payroll—but, yeah, the idea of capitalism as an antimarket, that makes sense. I was already familiar with the idea that firms are, in the words of Dan Davies, “islands of central planning linked by bridges of price signals,” but I hadn’t thought about this as a sort of definition of capitalism. (Dan Davies is, I assume, not directly related to William Davies who was quoted above.)

Here’s an example. Later in his article, William Davies writes:

Academic publishing, for example, is one of the most egregious rent-grabs around. Scholars, editors and reviewers work for free, so that large copyright-protected conglomerates can charge libraries several thousand pounds a year for digital access to journals they can’t do without. The profit margins of the big scientific publishers run as high as 40 per cent, enough to make the boss of Shell blush. Hence the enthusiasm for projects such as the not-for-profit Open Library of Humanities, set up by Birkbeck academics in 2013, which now publishes 33 open access journals per year. When it’s capitalism that’s the problem, and not markets, the only alternative is post-capitalism.

There’s some truth to that. I use Arxiv and my home page and, for that matter, this blog, to communicate scientific ideas directly without paying rent to Elsevier etc. I also publish articles in journals and I publish books with for-profit and non-profit publishers, so you could say I operate in some sort of mixed economy of publication.

But then I can tell you a story that puts us back into the capitalism-as-antimarket situation.

About 25 years ago, my friend Aaron Edlin started a set of journals which he called the Berkeley Electronic Press. The clever idea was that his journals would be online-only and freely accessible to all and, if you published a paper for one of his journals, you agreed to review some number of submissions. Also, the journals were arranged in four different tiers: you’d submit an article, and the editors would decide based on the reviews which tier your article would go in. Aaron’s an economist, and these innovations seemed like great resolutions of the problem of hassling reviewers and the problem of deciding what to publish. I published a paper in one of Aaron’s journals, back in the day, and it all went very smoothly. My article ended up in the third-tier “Contributions” category, and it’s only been cited 30 times, but, hey, what are you gonna do? The experience was much better than the usual story with academic journals where they act like they’re doing you some sort of huge favor for publishing your article. It was all very efficient and low-key. Aaron got his friends to edit some of these journals. He asked me too, but I was too busy.

In any case, their original business model didn’t seem to have worked out. Now they’ve just become one more crappy series of paywalled journals. I went to the Berkeley Electronic Press website and saw this: “In 2011, bepress chose to exit the commercial subscription-based journal business in order to focus all of our energies on our open access services; this meant selling the 60+ bepress journals which we had published for the last decade.”

I’m thinking that the mistake was to have 60+ journals in the first place. How can you possibly keep track of all of that? Maybe they should’ve just capped their number of journals at 10, and then it could all have worked out, I dunno. I don’t fault Aaron for this—I’ve started all sorts of projects that didn’t continue the way I’d originally planned, and nothing lasts forever in any case. He did keep those journals going for a few years, which isn’t nothing.

The relevance to the main theme of this post is that the Berkeley Electronic Press started out as some sort of cooperative or possibly market-based system but then got sucked into the capitalist antimarket.

We tend to think of capitalism as equivalent to the market, and of a cooperative as the opposite of a market, but in this case the connections go differently. The cooperative and market versions of the Berkeley Electronic Press are similar in that they involve some sort of open exchange between independent agents, whereas the capitalist antimarket version all happens behind many layers of obscurity.

I recognize that none of this is new to economists. This particular perspective was new to me, though, hence this post.

Anticipated good news if the economy goes downhill

There’s some concern about an upcoming recession caused by economic shocks, including tariffs, reduced tourism to the U.S., reduced hiring because of economic uncertainty, etc.

There’s always a silver lining. Some things we might expect to follow from an economic downturn:

– Less energy consumption. Fewer tourists means a decrease in airline flights, car rentals, etc. And if China is selling us less stuff, they may be burning less coal there. All in all, this is good for the environment. A global recession could lead to people eating less meat, or at least slowing the rate of increase in meat consumption. And if people are pessimistic about the future, they might have fewer kids, which would save lots of energy use.

– Lower gas prices. I don’t really care so much about this because my bike doesn’t use gasoline, but gas prices are famously influential in people’s perceptions of inflation. Less travel would imply lower demand for oil, and the price should go down. Indeed, if there’s enough of a depression, just about all prices could drop.

– Some improvement in quality of life. Maybe if your kids have fewer plastic toys, they’ll play more creatively with the toys they already have. Less travel = more quality time on the couch. Etc. The standard economic theory would suggest that taking things away from people would decrease their utility, but we know that the standard theory isn’t always correct.

OK, that’s just three things. But I’m no economist. I assume the experts in the audience could add a few more.

Intergenerational socioeconomic mobility over time for different ethnic groups

Elisa Jácome, Ilyana Kuziemko, and Suresh Naidu write:

We estimate long-run trends in intergenerational relative mobility for representative samples of the U.S.-born population. Harmonizing all surveys that include father’s occupation and own family income, we develop a mobility measure that allows for the inclusion of non-whites and women for the 1910s–1970s birth cohorts. We show that mobility increases between the 1910s and 1940s cohorts and that the decline of Black-white income gaps explains about half of this rise. We also find that excluding Black Americans, particularly women, considerably overstates the level of mobility for twentieth-century birth cohorts while simultaneously understating its increase between the 1910s and 1940s.

This is an interesting paper both for its substantive content and for its use of data. The authors also prepared a teaching supplement that walks through the analysis in detail. Good job!

P.S. For both of the displays above, I think a grid of small plots would work better. There’s this problem where researchers seem to think they need to cram as much as possible into one graph, but then you end up with all these symbols and colors, and the reader needs to go back and forth between the graph and the legend . . . it’s kind of a mess. On the plus side, it’s good to see any scatterplots at all in an empirical paper!
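
For what it’s worth, here’s a minimal sketch of what I mean by a grid of small plots, using simulated data (the group labels and variables are placeholders, nothing to do with the paper’s actual displays):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated data: one panel per group instead of one cluttered plot
# with many symbols and a long legend.
rng = np.random.default_rng(2)
groups = ["Group A", "Group B", "Group C", "Group D"]

fig, axes = plt.subplots(2, 2, figsize=(7, 6), sharex=True, sharey=True)
for ax, name in zip(axes.flat, groups):
    x = rng.uniform(0, 1, 50)
    y = 0.5 * x + 0.1 * rng.standard_normal(50)
    ax.scatter(x, y, s=12)
    ax.set_title(name)

fig.supxlabel("predictor (simulated)")
fig.supylabel("outcome (simulated)")
plt.tight_layout()
plt.show()
```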

Buy your Tesla at closing time: For 15 years, these stocks have been mildly fluctuating during the day and shooting up overnight.

Bruce Knuteson shares the above plot and writes:

Look at the strikingly suspicious overnight and intraday returns to Tesla’s stock noted by the Financial Times (cf. my rejoinder) and Forbes.

This suspicious return pattern in TSLA is easy to reproduce [data]. Nobody has articulated a plausible innocuous explanation for it. The only explanation that fits the facts is the market manipulation we have discussed. That has been the only explanation for nearly a decade now. Tesla’s stock is the source of much of Elon Musk’s wealth. The public still doesn’t know about this suspicious return pattern in the source of much of Elon Musk’s wealth because nobody has told them.
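
Since the claim is that the pattern is easy to reproduce, here’s a minimal sketch of the overnight/intraday decomposition from daily open and close prices (the file name and column names are placeholders, not Knuteson’s actual data file):

```python
import pandas as pd

# Placeholder input: a CSV of daily prices with columns date, open, close.
prices = pd.read_csv("daily_prices.csv", parse_dates=["date"]).sort_values("date")

# Overnight return: previous day's close to today's open.
overnight = prices["open"] / prices["close"].shift(1) - 1
# Intraday return: today's open to today's close.
intraday = prices["close"] / prices["open"] - 1

cum_overnight = (1 + overnight).cumprod() - 1
cum_intraday = (1 + intraday).cumprod() - 1

print(f"cumulative overnight return: {cum_overnight.iloc[-1]:.1%}")
print(f"cumulative intraday return:  {cum_intraday.iloc[-1]:.1%}")
```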

This has come up before, and it’s not just Tesla. Here’s a graph that Knuteson sent me a couple years ago as evidence of market manipulation:

As I said at the time, I have absolutely no idea about this sort of thing. I ran into someone yesterday who used to work in financial markets, and he was saying that there’s a class of high-volume traders who never like to hold onto assets overnight. And various theories came up in the comments to our previous post.

It’s an interesting statistical puzzle, in part because from my perspective it’s essentially impossible to understand without lots of subject-matter knowledge. And, unlike some other statistical puzzles (for example, the one discussed here), it’s a live issue.

One question that came up before is how this pattern looks for other financial assets. Knuteson shows a bunch of the relevant plots in this paper, for example:

Again, I entirely defer to others in trying to understand this one.

P.S. Knuteson also has this comment regarding the graphical displays:

Funniest spam of the week (an offer of $200,000)

This came in the email yesterday:

From: **
Subject: Journal Purchase Proposal – ECON JOURNAL WATCH
Date: April 11, 2025 at 03:47:32 EDT
To: [email protected]

Respected Editor,
ECON JOURNAL WATCH
I am Alex, Founding Director of ** Press. We are an international platform with extensive experience in academic journal management.

We would like to propose negotiations for a $200,000 acquisition of your journal, with a commitment to maintaining its reputation and confidentiality. I have attached a proposal presentation with more details and would be glad to address any questions. If you’re interested, we can arrange a Zoom meeting at your convenience to discuss further.

Looking forward to your thoughts.

Best regards,
Alex
Assistant Editor
**

The message also came with a Word document that there’s no way I’m gonna open.

So many funny things about this one, maybe the funniest being that “Alex” is identified both as the “Founding Director” of the press and also as its “Assistant Editor.” The Founding Director should be able to grab a higher title than Assistant, no? Also funny how they have “extensive experience in academic journal management” but they can’t bring themselves to mention any academic journals they actually manage. Vixra, perhaps? Or maybe Psychological Science in its 2010-2015 boom years?

As for the $200,000 . . . doesn’t “Alex” know that scientific citations are worth $100,000 each? Econ Journal Watch is small-time but it’s still gotta be worth more than 2 citations, no?

Here’s my counter-offer, “Alex.” Put $20 million on the table and we can talk. But here’s something to sweeten the offer for you: if I go for the deal, I’ll throw in JASA and JMLR for you too. I don’t have the authority to sell you those journals, but then again I don’t have the authority to sell Econ Journal Watch either. And I was on the editorial board of JASA once.

I’d offer to sell Berkeley Electronic Press, but that won’t work, as it’s already been sold to the highest bidder. I bet Aaron got more than a piddling $200,000 out of the deal. Hell, Alan Krueger got $100,000 to coauthor just one article!

Maybe I should be less amused than insulted by the offer of $200,000 for a single journal.

On the other hand, $200,000 is real money: I could use it to attend 11+ conferences featuring some mixture of active and washed-up business executives, academics, politicians, and hangers-on. Or, more to the point, it’ll buy me 139,130 Jamaican beef patties (I’m rounding down here, cos what can you do with 0.4 of a beef patty).

OK, here’s the deal, “Alex,” if you’re reading this email. Mail me $200–that’s right, not $200 thousand, just $200–in unmarked bills, then I’ll talk with the other people on the Econ Journal Watch editorial board and see if they’re interested in your offer. If it’s a yes, then we have a deal. But the $200 is a nonrefundable deposit, a fair payment for my time. Take it or leave it.

Heck, send me $400 and I’ll pass the offer on to the JASA editorial board as well.

P.S. We laugh, but, yes, these people are evil. They degrade our public online spaces in the same way that muggers degrade public physical spaces.

He can’t pay his bills but he has a second home . . . Whassup with that?

I just read Life in the Middle Ages, a reflective set of memoir-essays by James Atlas, who we discussed a few years ago in the context of his book about biography-writing, The Shadow in the Garden. Life in the Middle Ages came out twenty years ago, when Atlas was in his mid-fifties. The book is suffused with gentle regrets about his life and an awareness of how little time was left to him—which all makes me kinda sad since I’m almost 60!—actually will be 60 once this post appears—and, yeah, I think all the time about the dwindling number of years ahead of us. Atlas himself lived only to 70; to his credit, he completed his excellent Shadow in the Garden book in his late sixties. So he lived a full life, professionally speaking, even though in Life in the Middle Ages he expresses many regrets about his career setbacks.

I think the real problem with Atlas’s career has nothing to do with him; he just happened to enter a field—general-interest writing about literature—that was declining. Fewer people interested in literature, the gradual collapse of the economic model for newspapers and magazines, ease of mechanical reproduction so less need for live bodies doing the writing . . . put that all together, and Atlas in his career basically joined a decades-long game of musical chairs. When the chairs keep being removed, it’s natural to blame yourself, but it’s more just that he was caught in the wrong game. Which makes me sad because I love reading about literature. I think it would be cool to have been James Atlas, although I guess not in the later years when his audience declined, along with the rest of the audience for the sort of thing he was writing.

The other funny thing about Atlas’s book is that he talks about being broke—not poor, as he and his family seem to have all the possessions they might want, except for a fancy car (I don’t get why he wants a BMW or Jaguar, I guess it’s some sort of boomer thing?), but broke enough that they never have quite enough money to pay the bills, they’re always on the edge with credit card debt, he calls himself “lower upper middle class”—but then he keeps talking about the summer home they own in Vermont.

I can understand people being rich enough to have two homes, and I can understand people being broke enough to struggle to pay their bills every month—but it’s hard for me to picture both of these at once!

But then I had two thoughts which made it all clear to me:

1. The United States of America is kinda like James Atlas: We’re the richest country in the history of forever, we can have pretty much whatever we want (except that not everybody gets a BMW or Jaguar), and our national debt keeps going up, up, up. So, yeah, it’s possible to live the good life and have that second home, even though you can’t really pay your bills.

2. I’ve always been fortunate enough to have enough money to pay for whatever I want—I mean, not always, there’s a reason I ask for research grants to pay salaries for postdocs etc., but at a personal level, I can afford unlimited celery and Jamaican beef patties, pay to get my flat tires changed as needed, fly to faraway places, etc.—but I’m in an Atlasian mixture of debt and riches when it comes to time:

I have tons of free time, as evidenced by (a) the fact that I’m spending a half hour writing this blog post and (b) the fact that I spent a couple hours earlier this week reading Atlas’s book, for no other reason than that I felt like it. And this wasn’t even the only pleasure book I read over the Thanksgiving weekend. But I’m also in a continual time debt, a veritable treadmill of time commitments. I’m in the middle of writing 5 books and a few dozen research articles, and I keep taking on new projects. No way I can do all of these! But, as with Atlas and his finances, somehow I keep going.

So, from that point of view, my comfortable finances are an anomaly. Debt financing is the usual way of the world.

On that claim about “How does energy impact economic growth”

Hanno Böck writes:

I recently saw a graphic coming from here posted multiple times on social media that I found quite misleading in its data representation.

There exist some variations of it, but they all share the same problem.

The most notable issue is that the graphic uses logarithmic scales on both axes. This has the effect of squeezing everything together on the upper right end and visually creates a much stronger correlation than there actually is.

Another thing to note, and this is where I’d be curious what you think about it, is that it gives an R^2 value of 0.8 at the bottom. First of all, R^2 is, as far as I can tell, not something that can be easily and intuitively understood (it seems a simple r coefficient would be more appropriate). But that’s not the main problem. The value is, as far as I can tell, simply wrong.

When I try to calculate R^2 for that data, I get 0.43. It appears that what was done here was to calculate the R^2 value over the log values of the input data. (If I do that, I get 0.81.)

In case you want to play with the data, here’s some quick python I wrote to create similar graphs with a non-log scale, and the relevant data sources from the world bank and EIA.

My reply:

I don’t think the logarithmic scale is a problem, and it’s fine to compute the R-squared of log-scaled data. In any case, the scatterplot tells the story; I don’t think R-squared adds anything here.

I clicked through to the source, and the real problem seems to be their title, “How does energy impact economic growth.” The data they show are cross-sectional with no such causal implication.

Böck responded:

I’m surprised that you don’t see a problem in the log scale. I believe this is the main issue with this graph. (As a rule of thumb, I’d say log scales should rarely be used in public communication at all, as they are not easy to understand intuitively. If they are used, there needs to be a good explanation, which I don’t see here.)

To maybe illustrate this more clearly, I have attached linear and log-scaled versions of the data. To me, they tell a different story. The log version implies that there is a general, strong correlation between electricity consumption and per capita gdp. But the actual data tells me that the correlation is only present below a certain threshold, and above that, we have extreme differences of energy use in countries with very similar gdp levels. (E.g. quite rich countries like Denmark/Switzerland with a very low electricity use.)

Regarding your point about causal inference, that’s probably a valid point as well, but not really what I’m trying to get at here. The reason is that I don’t think that blog post got a lot of attention, but the graphic is shared very widely.

Böck posted a longer discussion here. Setting aside the above-discussed issues with the log scale and R-squared, the rest of his post has interesting economics content.
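
For concreteness, here’s a minimal sketch of the R-squared comparison at issue (the data files are placeholders standing in for the World Bank and EIA series; this is not Böck’s actual script):

```python
import numpy as np
from scipy import stats

# Placeholder inputs: per-country GDP per capita and electricity use.
gdp_per_capita = np.loadtxt("gdp_per_capita.txt")
electricity_use = np.loadtxt("electricity_use.txt")

r_raw = stats.pearsonr(electricity_use, gdp_per_capita)[0]
r_log = stats.pearsonr(np.log(electricity_use), np.log(gdp_per_capita))[0]

print(f"R^2 on raw values:    {r_raw**2:.2f}")
print(f"R^2 on logged values: {r_log**2:.2f}")
# The two numbers answer different questions: the first measures the fit of
# a linear relation on the original scale, the second the fit of a
# linear-in-logs (power-law) relation.
```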

We have here two papers that are critical of the culture of academic economics. Where can they be published?

The first paper is by Haynes Goddard. It’s called Promoting Intellectual Honesty in Higher Education: Addressing Cognitive Biases, Political Discourse, and Court Decisions, and it follows up on a discussion we had in this space in 2021. Back then, lots of people were stuck at home with nothing better to do than read blogs, so that little post got 96 comments.

Goddard sent me this paper and asked if I had any idea where it could be published.

My take on this article is that it would be perfect for a hypothetical journal of economics or social science criticism. There’s an existing journal, Econ Journal Watch, and I’m on its editorial board, but Goddard’s article wouldn’t quite fit there, partly because its political orientation doesn’t match that of the journal and partly because Econ Journal Watch focuses on critiques of particular papers in the field rather than critiques of the larger ideology.

And this made me think of another paper that would fit well in the hypothetical journal, Economics Criticism. It’s a paper I recently wrote, When fiction is presented as real: The case of the burly boatmen, which begins:

From Adam Smith’s pin factory and Rousseau’s state of nature onward, parables—openly fictional or speculative stories constructed to dramatize a theoretical point—have been central to economics and other social sciences. Sometimes, though, a parable can escape its cage and be presented as a real event that happened in the wild, in which case what was originally intended as an explanation can be presented as empirical evidence.

Attributing fiction as fact is dangerous for two reasons. First, we can fool ourselves and others into thinking the evidence for a proposition is stronger than it actually is. Second, stories are not elaborated at random; when they are altered in the retelling it is natural for authors to modify their details in ways that fit the theories they want to prove. This is fine when a parable is presented as such; the problem arises when the story’s original foundation and later trajectory are obscured, leading authors and readers to think of it as empirical confirmation.

This is a story of how a particular story spread and mutated across the economics literature over several decades, becoming more fictional at each step while being presented more and more as factual. It should be a warning to authors and readers of research papers to beware of stories that fit a theory all too well and to check through the trail of references before retelling–or, worse, elaborating upon–stories that are presented as real.

The two papers are related: both are critiques of the culture of academic economics rather than of any particular published result.

I don’t agree with everything in Goddard’s paper, and I don’t expect all of you to agree with everything in mine. That’s ok–the point is not to enforce universal agreement; indeed, a key point of both papers is to warn against ideological conformity.

Unfortunately the journal Economics Criticism does not exist. If any of you have suggestions of where Goddard and I can publish our papers, please let me know. Extra points if it’s a journal where I’ve not yet published.

How is an American research university funded?

[Update: I hadn’t realized that Johns Hopkins was such an outlier when writing this. I followed up with some stats from other universities in this comment responding to Andrew.]

If you want some insight into why American academics are losing their minds, it helps to understand how American research is funded. The U.S. classifies universities into tiers by how much research they do (a “college” is like a university but doesn’t have graduate programs). The R1 universities are the research universities you’ve heard of, like Stanford, Johns Hopkins, Columbia, and the University of Michigan. What you may not realize is that a huge chunk of their operating budget is derived from grants.

Here’s a link to the Johns Hopkins Annual Financial Report, 2023. I chose Hopkins because it’s been coming up with a lot of our job candidates. The breakdown of the budget is on pages 4 and 5. The bottom line is that the university, a non-profit organization, made a $414M “profit.” But let’s break down where the money comes in and where it goes out.

Page 4: Operating Revenue and Operating Expenses

First note that the numbers are in thousands of U.S. dollars. The total income is $7.8B. Let’s break that down.

“Tuition” brings in only $830M. Universities don’t collect full tuition—they give out a lot of financial aid. And they just don’t have that many students.

“Grants, contracts, etc.” bring in $2.3B (excluding the APL).

“Applied Physics Laboratory” brings in another $2.3B (yes, same number). They mince words in the intro, but if you read the bullet items, it’s basically a defense contractor whose mission is, quite frankly, frightening (click through for details of their “warfighting” support). The U.S. sadly entangles its defense budget and university budgets.

“Contributions” from individuals, foundations and corporations make up another $210M. I wonder how the great whale Bloomberg’s $1B donation is accounted for—I’m guessing this is lottery-style reporting of $1B, not net-present value of $1B.

“Net assets released from restrictions” of $110M, which means previously restricted donations become available to be spent.

“Clinical services,” i.e., the Hopkins hospital system, brought in $890M net!!! More than tuition, but less than grants.

“Reimbursements from affiliates” of $760M is basically payments from entities affiliated with Hopkins (a broader network of hospitals and research institutions) for services rendered. Yikes. Add that to clinical services and you have a hospital network making way more money than tuition.

“Other revenues” of $190M. No clue as to what this is.

“Endowment payout” of $425M. U.S. non-profit law requires institutions to pay out a fraction of their endowment every year. My guess is that they’d pay out zero if they could, since university presidents are largely incentivized around two things: raising the endowment and raising U.S. News and World Report rankings.

“Auxiliary expense” income of $105M from things like bookstores, housing, and dining. They’re actually turning a huge “profit” on this stuff! Who knew?

“Maryland State aid” of $65M. Basically a drop in the bucket.

“Investment return” of $73M. I don’t know if this is the endowment or other investments.

Page 5: Other changes in net assets with and without donor restrictions

This is mostly investment stuff related to pensions and investment returns. There is one item of note: the contributions.

“Contributions” totalled $2.1B! This is what it sounds like, but the contributions are typically restricted (i.e., they come with strings attached, typically building a building with someone’s name on it or funding some tenure line). And yet I’m guessing they still pester their alumni with annual “donate to Hopkins” letters. I know both Michigan State and Edinburgh never seem to miss a move with their begging.

Overhead

The government is threatening to reduce, or maybe already has reduced (hard to keep up), overhead rates from 60% to 15% (meaning that if you apply for $1 of direct costs, you currently also apply for $0.60 of overhead). The plan would remove 3/4 of the overhead revenue, which for Hopkins was $460M, meaning a revenue reduction of roughly $345M.
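
Here’s the back-of-envelope arithmetic spelled out, using the rough figures above and the simplifying assumption that overhead revenue scales directly with the rate:

```python
# Rough figures from the text above; treat as approximations, not audited numbers.
old_rate = 0.60
new_rate = 0.15
overhead_revenue = 460e6  # Hopkins overhead revenue under the old rate

fraction_lost = (old_rate - new_rate) / old_rate  # 0.75, i.e., 3/4
revenue_reduction = fraction_lost * overhead_revenue

print(f"fraction of overhead revenue lost: {fraction_lost:.0%}")
print(f"revenue reduction: ${revenue_reduction / 1e6:.0f}M")  # about $345M
```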

The much bigger effect will be what’s happening to Columbia, with cuts across the board in NIH and NSF funding. If that hits Hopkins, it’s going to hurt, because that funding is a large part of their $2.3B research grant budget.

Was 2023 an outlier?

No. It’s just more of the same in 2024. The report linked above covers both 2023 and 2024; in 2024, the bottom-line “profit” was up to $2.6B!!!

Panic setting in?

Despite this recent success, they’re panicking and reportedly laying off 2,000 employees. If you click through to the article, you’ll realize that’s 250 people “cut” in the U.S. and 1,900 internationally, mostly working on international health aid, with another 200 locals “furloughed” (I don’t know exactly what that means in this context).

BOTTOM LINE: JOHNS HOPKINS 2023 BUDGET


  • Research grants and overhead: $2.3B
  • Defense contracting: $2.3B
  • Health care and other services: $1.6B
  • Tuition: $841M
  • Endowment payout: $425M
  • Bookstores, housing, dining: $105M
  • State aid: $64M

Operating “profit”: $410M

Bottom-line “profit”: $2.3B

Which leaves us with the question of what one calls a “profit” at a non-profit. “Changes in net assets” is the term of art used in the reports.

Speaking of government waste . . .

We’ve been hearing a lot about wasteful public spending, so this story seems relevant:

Benjamin E. Sasse, former president of the University of Florida, spent unjustified amounts of money on university salaries, events, and contracts, with little to show for the expenses, according to a report the state’s auditor general released Tuesday.

The university failed to show how any of the expenses detailed in the report benefited the university . . .

Sasse’s office spent $14.8 million during the year he was in office, about 72 percent more than his predecessor did the year before.

Among the top payments was a $6.4-million contract with McKinsey & Company, a consulting firm meant to help the university chart a path forward. Neither university records nor employees could show the contract’s benefits to the auditor general’s office, according to the report. . . .

During his short 17-month tenure, Sasse hired 24 employees who held 38 different administrative positions. Several of those employees worked remotely, incurring large reimbursements for travel to the university.

Fourteen of those positions didn’t include job descriptions . . .

One of Sasse’s remote hires was Penny Schwinn, the university’s first vice president for PK-12 and pre-bachelor’s programs, who served less than a year in her position. She earned a salary of $367,500 and incurred about $17,500 in expenses that were paid for by the university, primarily for work travel, according to records The Chronicle obtained via an open-records request.

And here’s the kicker:

Schwinn has since been nominated by President Trump to serve as deputy secretary in the U.S. Department of Education.

There’s more:

The report says the university didn’t “document the reasonableness” of maintaining Sasse’s more than $1-million salary after he stepped down from the presidency in July 2024. He now serves as a part-time professor and an adviser to the university’s trustees. . . . Sasse’s responsibilities in his adviser position “appear to be significantly less in scope” than those of the president, according to the auditor’s report, and “the public purpose of such a salary is not readily apparent.”

And this sleazy bit:

The university . . . noted that about 80 percent of the salary comes from non-state funds, in accordance with state law that caps presidential salaries.

Hey, assholes: Money is fungible. That’s a million bucks that otherwise could’ve gone for university operations.

More fun:

Sasse also spent about $376,000 on flights chartered through the University Athletic Association. The average trip cost $16,820 . . . Other expenses include an hour-long holiday party featuring hot chocolate, cider, peppermint chocolates, and cookies that cost $62,650; a two-hour holiday party for university employees that cost $169,755; and an invitation-only football tailgate that cost about $46,000.

Jeez . . . for $62,650 you think they could’ve afforded some booze . . .

The news article points to Sasse’s post here in defense of . . . actually I’m not sure what he’s defending. As befits his position of retired politician, he spews a lot of platitudes: “fiscal stewardship . . . hardworking taxpayers . . . the salt-of-the-earth people . . . the AI revolution . . . status-quo bureaucracies . . . lifelong learning . . . the first time in human histories . . . an unprecedented consortium . . . brought to fruition . . . disruptive . . . top talent . . .” He calls people “folks.”

The funniest part is when he writes, “Any $9 billion enterprise should always be finding ways to tighten its belt.”

In all those 1700 words, he never said anything about the million dollars he’s getting for his current mini-job, the $6.4 million to the consulting firm that yielded no benefits, the $62,650 party, the $376,000 in flights, or the 72% increase in spending. If that’s belt-tightening, I’d hate to see what happens when the guy goes wild and lets loose!

On the plus side, the University of Florida is now looking for a new president. I heard this guy is available?

How is the integrity crisis in business reporting like the integrity crisis in science?

We’ve talked a lot over the years about the replication crisis in science, but maybe we should be calling it the integrity crisis.

Lack of integrity isn’t quite the same thing as fraud. You can be doing bad science because you’re in a hurry, or because you have some point you want to make, or just because you’re using bad methods, and none of this is necessarily fraud or even “scientific misconduct,” but it still reflects a deficit of scientific integrity. Indeed, you might well be a person of high moral integrity in other aspects of life–you might pay your bills on time, be nice to your family and coworkers, you might even be doing your science with a goal of helping the world–and you might, to the best of your knowledge, be working with accurate data and be using legitimate research methods–but you could be having problems with scientific integrity (dictionary definition: “firm adherence to a code of especially moral or artistic values : incorruptibility”) in that you’re studiously avoiding potential problems with your work.

This relates to Clarke’s Law: “Any sufficiently crappy research is indistinguishable from fraud.” When that guy was writing those beauty-and-sex-ratio papers, I didn’t notice any fraud, but I did notice that he avoided confronting the clear problems that had been pointed out with his work. I see this as a lack of integrity. Similarly, when those Nudge authors memory-holed their earlier effusive praise of a now-disgraced food-behavior researcher, that was a lost opportunity for them to learn and also a lack of integrity, in that they were avoiding a chance to see how they were on the wrong track. Not fraud, but a deficit of scientific integrity.

I was thinking about this after reading this Columbia Journalism Review article by Sara Silver subtitled, “Understanding money is key to grappling with power. Business journalism isn’t set up for that,” and which begins:

When stock markets closed on January 19, 2021, Netflix posted a press release with its end-of-year financial results. Business is booming, the message went. About a year into the coronavirus pandemic, the company had provided “escape, connection, and joy” to more than two hundred million subscribers. A New York Times journalist had an hour to review Netflix’s earnings report: eleven pages, half corporate-speak, half dense tables filled with accounting metrics–some of them standard, some invented. During after-hours trading, Netflix stock began to rise.

The pressure was on to explain the numbers while standing firm against salesmanship. In its release, Netflix had highlighted operating income. That excluded hefty interest on its debt (the company had borrowed sixteen billion dollars to fund a spree of shows) and the expense of taxes (a year before, Netflix had quadrupled its profit by emptying a tax reserve that it had only recently stuffed). But Netflix’s finance whizzes had turned straw into gold. The Times story duly noted the company’s rising operating income, not the fact that profit was down 8 percent for the quarter–and the drop looked even steeper when compared with the burgeoning sales. . . .

The Times noted that Netflix would still have ten billion to fifteen billion dollars in debt, but said that the company “made enough revenue to pay back those loans while maintaining its immense content budget.” Netflix now makes enough to repay its debt, the story went; “the gambit seems to have worked.”

Silver summarizes:

The article was written, edited, and posted in seventy-eight minutes. Alas, it was wrong.

Ulp.

The business reporting that Silver is criticizing is an example of what I would call a deficit of integrity in business reporting. Again, the claim is not that the reporters are committing fraud or even knowingly making an error–as Silver puts it, “No one is dumb in this story. Nor is the situation unique . . . Consistently, and across the press, the rush to cover corporate earnings sets journalists up for failure.”

The rest of Silver’s article discusses the conditions under which these reporting problems arise:

Earnings coverage did not always work this way. In the nineties, when I [Silver] started out as a business reporter, journalists could lean on sell-side analysts, as they were known, who worked for banks that issue stocks and bonds for companies. It was their job to help us understand corporate strategy and to warn of red flags. They toured factories and took meetings with executives; they connected the numbers on an earnings report to the actions of CEOs, supply chain snafus, demographics, and emerging technologies. . . .

Seasoned reporters found ways to sidestep obvious conflicts of interest, knowing that analysts would recommend that clients buy stocks their banks were selling. In effect, Wall Street was policing itself. . . . [But] banks began axing their research departments, just as publishers were shrinking newsrooms. The financial crisis of 2008 and 2009 led to more bloodletting and consolidation, with fewer companies listed on public stock markets, and fewer people buying individual stocks than a slice of the market through index funds. . .

It’s an assembly line: One person listens to quarterly earnings calls with executives. Another checks news that could affect the company’s customers, suppliers, or competitors. One plugs earnings numbers into a spreadsheet. Only the head of the team makes calls–and the calls are narrow, predicting short-term earnings. These judgments are made within minutes. Earnings often beat expectations–mostly in alignment with companies’ guidance–which enables analysts to recommend that clients buy shares. That, in turn, sends stock prices up. For journalists, these recommendations are mostly useless.

And yet few business reporters, even at elite news organizations, understand the implications of all this change. Market news sites still churn out stories on individual stocks based on sell-side research. When a company’s numbers beat Wall Street estimates (set within a range the company provided), that typically becomes a story’s lede.

This is all interesting in itself, but also it reminds me so much of . . . the process of scientific publication and review: a push toward binary judgments (positive or negative), a trust in intermediate authorities, a hollowing-out of expertise, conflicts of interest.

And the integrity crisis, which is deeper than mere fraud and corruption.

The R-squared on this is kinda low, no? (Nobel prize edition)

An economist who would prefer anonymity points to the above wacky graph of a “robust regression.” It’s from a paper written by 2 out of the 3 recent Nobel prize winners in economics!

The full paper is here, and my correspondent points us to p. 921 of the published version.

My correspondent writes:

They can do what they want. They ignore all criticism, even when repeated by mainstream economists in whispers.

“They ignore all criticism” seems pretty standard in science. I guess the only hope is for the field to advance through external criticism. In that sense, it’s fine for questionable papers to be published, as long as data and code are made available and as long as the journals do not hold criticism to higher standards than the original work.

Unfortunately, giving out Nobel prizes is kind of the opposite of criticism (and here’s another recent example).

Our proposal for scheduled post-publication review: “Even if each review took twice the effort of the average pre-publication review, our system would add only 1 percent to the total reviewing effort, while providing important perspectives on papers representing more than one-quarter of the citations received by these influential journals.”

The current system of scholarly journal review is absolutely nuts. The vast majority of review effort goes to papers that nobody reads. We can do better via scheduled post-publication review: for example, every time a paper reaches its 250th citation, the journal commissions an outside review–not with the goal of retracting the paper, but to provide a new perspective. If the paper has been cited 250 times, it's worth getting that new take–and it's rare enough for a published article to reach that level of citation that the total cost of these new reviews would be low, only a small fraction of the cost of existing pre-publication review.

Andy King and I present the idea:

Problems with the credibility of empirical research have been discussed for decades.

The most common prescriptions for improving credibility — better review, public critique, and replication — have merit, but they fail to direct scarce resources where they can do the greatest good: those few publications with the greatest impact.

We propose an alternative, using review resources more efficiently and effectively by borrowing the idea of “replay review” from professional sports. The current peer-review system would continue to judge research articles in real time when they are submitted, but the publications that go on to have an outsized impact would be evaluated again, and in more detail, to confirm or refine the initial assessment.

Here’s the key insight:

All proposals for strengthening primary review face an economic challenge: Most of the resources spent on strengthening it are wasted because, for most submissions, the existing review process is already strong enough. At top journals, 90 percent or more of the submissions are rejected for apparent flaws, and thus, a strengthened review process will not change the outcome. Of the 10 percent that are accepted and published, most are lightly read and cited and thus have little influence.

And this is what we recommend:

Once a publication receives a specified number of citations, it would receive an independent review. These reviews would then be published in full, along with author responses, so that readers have additional guidance on how to interpret the initial publication. . . .

We crunched some numbers:

To assess the practicality of our proposal, we evaluated the submission and citation history of articles published in the 2014 cohort of the peer-reviewed empirical journals selected by the Financial Times for determining the research rank of business schools.

Skewness in citation rates means that a large proportion of the citation impact can be checked at a relatively low cost. Replay review of just those articles receiving more than 250 citations would mean that publications accounting for 28 percent of all the citations would be checked through further review. Even if each review took twice the effort of the average pre-publication review, our system would add only 1 percent to the total reviewing effort, while providing important perspectives on papers representing more than one-quarter of the citations received by these influential journals.

This is an elaboration of my efficiency argument for post-publication review.
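To see how the quoted numbers hang together, here's a minimal back-of-the-envelope sketch. The 90 percent rejection rate and the factor-of-two review cost come from the excerpt above; the submission count and the share of published articles that ever cross the 250-citation threshold are hypothetical round numbers, chosen only to show how a figure on the order of 1 percent can fall out of the arithmetic, not values from our analysis of the 2014 cohort.

```python
# Back-of-the-envelope sketch of the "replay review" cost calculation.
# The acceptance rate and the review-effort multiplier come from the
# excerpt quoted above; the submission count and the share of published
# papers crossing the citation threshold are HYPOTHETICAL illustrative values.

submissions = 10_000          # hypothetical annual submissions across a set of journals
acceptance_rate = 0.10        # "90 percent or more of the submissions are rejected"
replay_effort_multiplier = 2  # each replay review costs 2x an average pre-pub review

published = submissions * acceptance_rate

# Hypothetical: suppose 5% of published articles eventually pass 250 citations.
share_crossing_threshold = 0.05
replayed = published * share_crossing_threshold

# Every submission gets one unit of pre-publication review effort;
# each replayed article adds `replay_effort_multiplier` units.
pre_pub_effort = submissions * 1.0
added_effort = replayed * replay_effort_multiplier

print(f"Added effort as a share of existing review effort: "
      f"{added_effort / pre_pub_effort:.1%}")
# With these illustrative inputs the added effort is about 1% -- the same
# order of magnitude as the figure quoted from the article.
```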

P.S. The Chronicle of Higher Education gave our article the title, "Social Science Is Broken. Here's How to Fix It." We didn't choose that title! Our recommendation was, "Social Science Needs Replay Review." But those of you involved in the news media know that the authors of an article are rarely in charge of the title. We like our suggestion of post-publication review, but we're under no illusion that it would "fix" social science. It's just one small part of the picture.

P.P.S. We also thank David Wescott at the Chronicle for editing our article.

Conflicts of interest in vaping studies. (Yes, researchers will risk their professional reputations out of some combination of political motivations, irritation, and money.)

Paul Alper points us to this news report that states:

Two New York University (NYU) professors were recently found to have collaborated directly with executives from the vaping company Juul without disclosing these relationships to academic journals or Congress. This revelation came to light during a STAT investigation. At a time when the youth vaping crisis was at its peak, these professors, David Abrams and Ray Niaura, emerged as authoritative voices defending vaping as an effective public health strategy for adults to cut back or quit smoking, despite its growing popularity among youth.

David Abrams, a frequent commentator on vaping in the news media, coordinated extensively with Juul on public messaging in 2017 and 2018. He asked Juul officials for talking points, allowed company executives to review an academic article before publishing, and attended Juul scientific advisory board meetings—all without disclosing these connections to journal publishers or the public.

Ray Niaura was also involved in collaborating with Juul executives but did not disclose these relationships.

Alper asks:

Do some people have a secret death wish? Do Abrams and Niaura not realize that their public utterances will be scrutinized? Yet once again, all that fuss over Francesca Gino's supposed manipulations does not compare with the harm being done in other fields involving cancer and cancer-causing products.

I have two responses.

1. Researchers have seriously damaged their reputations by working for cigarette companies, even without any suggestion of research misconduct or hidden conflicts of interest. Two prominent examples are R. A. Fisher and Donald Rubin, both of whom have been hugely influential in statistics and many applied fields, and remain very respected, but still come off looking pretty foolish for their notorious edgelord positions on smoking and health (for example, here's Fisher referring to anti-smoking campaigns as "terrorist propaganda," and I personally heard Rubin argue that smoking is not addictive). For some reason, prominent statistician Ingram Olkin isn't so notorious in this regard, perhaps because he just took cigarette money and kept quiet about it, without taking any strong stances on the issue.

In regard to Alper’s questions, I don’t think Fisher, Rubin, or Olkin had any professional “death wishes”; I’m guessing they were motivated by some combination of political views, irritation at the statistical weaknesses of some anti-smoking arguments, and money. And, hey, these are strong motivations for me too! Not regarding smoking, but on other statistical and policy issues I’ve written about.

2. Regarding the specifics of the case, this news is not surprising. The names David Abrams and Ray Niaura rang a bell. From my post in Feb 2020 on the topic:

I looked up David Abrams, the first author of the letter sent to the journal. He’s a professor of public health at New York University, his academic training is in clinical psychology, and he’s an expert on cigarette use. For example, one of his recent papers is, “How do we determine the impact of e-cigarettes on cigarette smoking cessation or reduction? Review and recommendations for answering the research question with scientific rigor,” and another is “Managing nicotine without smoke to save lives now: Evidence for harm minimization.” A web search brought me to this article, “Don’t Block Smokers From Becoming Smoke-Free by Banning Flavored Vapes,” on a website called Filter, which states, “Our mission is to advocate through journalism for rational and compassionate approaches to drug use, drug policy and human rights.” Filter is owned and operated by The Influence Foundation, which has received support from several organizations, including Juul Labs, Philip Morris International, and Reynolds American, Inc. . . .

So, several of the people involved in this controversy have conflicts. In their letter to the journal, Abrams et al. write, “The signatories write in a personal capacity and declare no competing interests with respect to tobacco or e-cigarette industries.” I assume this implies that Abrams is not directly funded by Juul Labs, Philip Morris International, etc.; he just writes for an organization that has this funding, so it’s not a direct competing interest. But in any case these researchers all have strong pre-existing pro- or anti-vaping commitments.

Hmmm . . . so maybe this was a competing interest! If Abrams and Niaura were asking Juul officials for talking points, allowing company executives to review an academic article before publishing, and attending Juul scientific advisory board meetings . . . it does sound like they were expecting compensation from Juul in some form or another, right? I can't really say. The linked news article is paywalled, so I don't know what is in the documents that it cites.

In any case, if the researchers really “attended Juul scientific advisory board meetings,” then, yeah, that sounds like a conflict of interest to me. Much more of a conflict than simply publishing an article on a website funded by Juul.

Supporting research because it’s cool or because it’s useful

In a recent post, I wrote, “I study social science because it’s interesting and important, not because I think there are buttons we can push.”

Sean Manning adds this:

I think this touches on a basic division. A lot of us would be happy to do and popularize cheap research which gets public funds for the same general reason that arenas and golf clubs and parks get public funds: some people enjoy using them, and more people enjoy watching people using them. This faces direct opposition from neoliberals (people who think that all institutions should be modelled on for-profit corporations) who disagree that satisfying curiosity is a good, but indirect opposition from people who want lots of money for their research and make grandiose claims about how useful their research is. The more institutions become dependent upon funds obtained by promising results, the more they spend and the more expensive that cheap research looks. Lots of people with academic jobs give up applying for grants or refunds because the paperwork created to manage million-dollar grants is so time-consuming on a thousand-dollar grant and the delay between applying and actually getting to do the work drains their energy.

One of Richard Feynman’s colleagues already felt ashamed in 1974 to say that he did research because knowing the answer would be cool. And there are books about how the American research university as we know it today grew out of research for the military during WW II.

It’s kind of political, no? Universities are disliked on the left for perpetuating inequality and disliked on the right for providing jobs to a bunch of left-wingers. On the other hand, scientific research is generally thought of as a good thing (with some exceptions such as research on offensive weapons).