Against overly restrictive definitions: No, I don’t think it helps to describe Bayes as “the analysis of subjective beliefs” (nor, for that matter, does it help to characterize the statements of Krugman or Mankiw as not being “economics”)

I get frustrated when people use overly restrictive definitions of something they don’t like.

Here’s an example of an overly restrictive definition that got me thinking about all this. Larry Wasserman writes (as reported by Deborah Mayo):

I wish people were clearer about what Bayes is/is not and what frequentist inference is/is not. Bayes is the analysis of subjective beliefs but provides no frequency guarantees. Frequentist inference is about making procedures that have frequency guarantees but makes no pretense of representing anyone’s beliefs.

I’ll accept Larry’s definition of frequentist inference. But as for his definition of Bayesian inference: No no no no no. The probabilities we use in our Bayesian inference are not subjective, or, they’re no more subjective than the logistic regressions and normal distributions and Poisson distributions and so forth that fill up all the textbooks on frequentist inference. See chapter 1 of BDA for lots of examples of prior distributions that are objectively assigned from data. Here’s my definition of “Bayesian.” Science in general has both subjective and objective aspects. Science is always full of subjective human choices, and it’s always about studying larger questions that have an objective reality.
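
To make the “objectively assigned from data” point concrete, here is a generic sketch (the numbers are made up, and this is not an example from BDA): the prior for a new group is estimated from the observed rates of many earlier groups, and nothing in the calculation asks for anyone’s beliefs as an input.

```python
# Sketch of a prior assigned from data (numbers invented for illustration).
# Fit a Beta prior to the success rates of many previous groups by the method
# of moments, then compute the posterior for a new, small group.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for historical data: success rates from 200 earlier groups
historical_rates = rng.beta(8, 32, size=200)

m, v = historical_rates.mean(), historical_rates.var()
s = m * (1 - m) / v - 1            # method-of-moments "prior sample size"
alpha, beta = m * s, (1 - m) * s

y, n = 3, 10                       # new group: 3 successes in 10 trials
post_a, post_b = alpha + y, beta + (n - y)

print(f"prior estimated from data: Beta({alpha:.1f}, {beta:.1f})")
print(f"posterior mean for the new group: {post_a / (post_a + post_b):.3f}")
print(f"raw proportion in the new group:  {y / n:.3f}")
```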

Now, don’t get me wrong—there are lots of good reasons for wanting to avoid the use of prior distributions or to use various non-Bayesian methods in different applications. Larry writes, “In our world of high-dimensional, complex models I can’t see how anyone can interpret the output of a Bayesian analysis in any meaningful way,” and I have no doubt of his sincerity. I myself have difficulty interpreting the output of non-Bayesian analyses in the high-dimensional, complex models that I work on—I honestly find it difficult to think about non-Bayesian estimates of public opinion in population subgroups, or a non-Bayesian estimate of the concentration of a drug in a complex pharmacology model—but I accept that Larry’s comfort zone is different from mine, and I think it makes a lot of sense for him to continue working using methods that he feels comfortable with. (See here for more of this sort of talk.) So, it’s fine with me for Larry to report his discomfort with Bayesian inference in his experience. But please please please don’t define it for us! That doesn’t help at all.

To get back to Larry’s definition: Yes, “the analysis of subjective beliefs” is one model for Bayes. You could also label classical (or frequentist) statistics as “the analysis of simple random samples.” Both definitions are limiting. Yes, Bayes can be expressed in terms of subjective beliefs, but it can also be applied to other settings that have nothing to do with beliefs (except to the extent that all scientific inquiries are ultimately about what is believed about the world). Similarly, classical methods can be applied to all sorts of problems that do not involve random sampling. It’s all about extending the mathematical model to a larger class of problems.

I do think, by the way, that these sorts of interactions can be helpful. I don’t agree with Larry’s characterization of Bayesian inference, but, conditional on him believing that, I’m glad he wrote it down, because it gives me the opportunity to express my disagreement. Sometimes this sort of exchange can help. For example, between 2008 and 2013 Larry updated his beliefs regarding the relation of randomization and Bayes. In 2008: “randomized experiments . . . don’t really have a place in Bayesian inference.” In 2013: “Some people say that there is no role for randomization in Bayesian inference. . . . But this is not really true.” This is progress! And I say this not as any kind of “gotcha.” I am sincerely happy that, through discussion, we have moved forward. Larry is an influential researcher and explicator of statistics, and I am glad that he has a clearer view of the relation between randomization and Bayes. From the other direction, many of my own attitudes on statistics have changed over the years (here’s one example).

In case this helps, here are some of my thoughts on statistical pragmatism from a couple years ago.

And, just to be clear, I don’t think Larry is being personally aggressive or intentionally misleading in how he characterizes Bayesian statistics. He’s worked in the same department as Jay Kadane for many years, and I think Jay sees Bayesian statistics as being all about subjective beliefs. And once you get an idea in your head it can be hard to dislodge it. But I do think the definition is aggressive, in that it serves to implicitly diminish Bayesian statistics. Once you accept the definition (and it is natural for a reader to do so, as the definition is presented in a neutral, innocuous manner), it’s hard to move forward in a sensible way on the topic.

P.S. Larry also writes of “Paul Krugman’s socialist bullshit parading as economics.” That’s another example of defining away the problem. I think I’d prefer to let Paul Krugman (or, on the other side, Greg Mankiw) define his approach. For better or worse, I think it’s ridiculous to describe what Krugman (or Mankiw) does as X “parading as economics,” for any X. Sorry, but what Krugman and Mankiw do is economics. They’re leading economists, and if you don’t like what they do, fine, but that just means there’s some aspect of economics that you don’t like. It’s silly to restrict “economics” to just the stuff you like. Just to shift sideways for a moment, I hate the so-called Fisher randomization test, and I also can’t stand the inverse-gamma (0.001, 0.001) prior distribution—but I recognize that these are part of statistics. They’re just statistical methods that I don’t like. For good reasons. I’m not saying that my dislike of these methods (or Larry’s dislike of Krugman’s economics) is merely a matter of taste—we have good reasons for our attitudes—but, no, we don’t have the authority to rule that a topic is not part of economics, or not part of statistics, just because we don’t like it.

Oddly enough, I don’t have as much of a problem with someone describing Krugman’s or Mankiw’s writing as “bullshit” (even though I don’t personally agree with this characterization, at least not most of the time) as I do with the attempt to define it away by saying it is “parading as economics.” Krugman’s and Mankiw’s writing may be bullshit, but it definitely is economics. No parading about that.

P.P.S. The above post is from 2014. When it appeared, Jordan Ellenberg commented that I should consider posting it once a year. I haven’t done that, but it’s been six years now, so I thought it was worth posting again. Also, in the years since the appearance of the above post, Christian Hennig and I wrote our paper, Beyond subjective and objective in statistics.

58 thoughts on “Against overly restrictive definitions: No, I don’t think it helps to describe Bayes as “the analysis of subjective beliefs” (nor, for that matter, does it help to characterize the statements of Krugman or Mankiw as not being “economics”)”

  1. This reminds me of Pat Metheny’s point, in his classic anti–Kenny G rant, that it’s not helpful for critical purposes to characterize KG as being “not actually jazz.” Better to let it be jazz, because then you can use the whole apparatus of jazz criticism to explain why it sucks.

    Having said that, it has gotten me to thinking lately why so many jazz musicians (myself included, given the right “bait” of a question, as I will explain later) and audiences have gone so far as to say that what he is playing is not even jazz at all. Stepping back for a minute, if we examine the way he plays, especially if one can remove the actual improvising from the often mundane background environment that it is delivered in, we see that his saxophone style is in fact clearly in the tradition of the kind of playing that most reasonably objective listeners WOULD normally quantify as being jazz. It’s just that as jazz or even as music in a general sense, with these standards in mind, it is simply not up to the level of playing that we historically associate with professional improvising musicians. So, lately I have been advocating that we go ahead and just include it under the word jazz – since pretty much of the rest of the world OUTSIDE of the jazz community does anyway – and let the chips fall where they may.

    • Agreed.

      Equally, that means that the argument then becomes “In my view Kenny G is not very good at Jazz, for reasons X, Y, and Z”. Ideally those reasons are very good and clearly explained, though I don’t know enough about jazz to fill in that list. I think we also want to avoid the “Kenny G Jazz is bad Jazz” simplification, just as we would want to avoid the simplification that “All Paul Krugman economics is bad economics”.

      The trick is of course that you’re reading and evaluating the quality of different sources of information in a Bayesian way as well. Once I’ve heard a lot of Kenny G and my priors have been updated many many times, from a strictly personal Bayesian perspective I may be prepared to simply state that “Kenny G isn’t very good”. That’s not very informative of course, it’s just that those priors keep going down each time I get new information.
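
      (A toy version of that updating, with invented numbers: treat each listen as a thumbs-up/thumbs-down observation and watch the posterior tighten.)

```python
# Toy Bayesian updating of "probability a given Kenny G track is good"
# (the data are invented): start from a flat Beta(1, 1) prior and update
# after each listen.
alpha, beta = 1.0, 1.0
listens = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]   # 1 = liked it, 0 = didn't

for liked in listens:
    alpha += liked
    beta += 1 - liked

print(f"posterior mean after {len(listens)} listens: {alpha / (alpha + beta):.2f}")
# With enough listens the posterior is tight, and "Kenny G isn't very good"
# is a compact summary of it rather than a bare assertion.
```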

  2. Comments on terminology from Jaynes (1985): https://bayes.wustl.edu/etj/articles/random.pdf

    One critic states that my terminology is nonstandard and can mislead. He fails to note that this applies only to the 1957 papers; and even there my terminology was standard when written. It is, for example, essentially the same as that used by Jimmy Savage (1954). It is not our fault that Latter-Day Commentators, in ill-advised attempts to “classify Bayesians”, have scrambled the meanings of our words, producing a language appropriately called NEWSPEAK. To translate, we note a few approximate equivalences between standard terminology of the 1950’s and NEWSPEAK:

    1950’s – NEWSPEAK
    objective – orthodox
    subjective – objective
    personalistic – subjective
    (….)

    Because of this utter confusion that has been visited upon us, it is today misleading to use the terms “subjective” and “objective” at all unless you supply the dictionary to go with them. My more recent works have used them only in such phrases as “subjective in the sense that —.”

    My papers of 1957 used the term “subjective” not only in the superficial sense of “not based on frequencies”, but in a deeper sense as is illustrated by the following statement of position:

    In relativity theory we learn that there is no such thing as “absolute” or “objective” simultaneity. Nevertheless, each observer still has his “subjective” simultaneity, depending on his state of motion; and this, being a consequence of using a coordinate system, is just as necessary in describing his experiences as it was before relativity. The advance made by relativity theory did not, then, lie in rejecting subjective things; but rather in recognizing that subjective character, so that one could make proper allowance for it, the “allowance” being the Lorentz transformation law, which shows how the coordinates we assign to an event change as we change our state of motion.

    It has seemed to me from the start, that a scientist should have just the same attitude toward probability. There is no such thing as “absolute” or “physical” probability; that is just as much an illusion as was absolute simultaneity. Yet each of us still has his “subjective” probability, depending on (or rather, describing) his state of knowledge; and this is necessary for conducting his reasoning. Achievement of rational thinking does not lie in rejecting “subjective” probabilities, but rather in recognizing their subjective character, so that we can make proper allowance for it, the “allowance” being Bayes’ theorem, which shows how the probability we assign to an event changes as we change our state of knowledge.

    The phrase “reasonable degree of belief” originates from Jeffreys, although some try to connect it to me. My admiration for Jeffreys has been expressed sufficiently (Jaynes, 1980); but this terminology created some of his difficulties. That innocent looking little word “reasonable” is to some readers as the red flag to the bull; the adrenalin rises and they miss the substantive content of his message. Jimmie Savage (1954) used instead “measure of trust” which I felt had both economic and emotional overtones. Therefore I have used the less loaded phrase “degree of plausibility” which seems to express more accurately what inference really involves.
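
    A minimal numeric illustration of the “allowance” Jaynes describes (the numbers are invented): two observers can coherently assign different probabilities to the same event because their states of knowledge differ, and Bayes’ theorem is what tells each of them how to revise that probability when new information arrives.

```python
# Two observers, same evidence, different states of knowledge (invented numbers).
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """P(H | data) via Bayes' theorem."""
    num = prior * p_data_given_h
    return num / (num + (1 - prior) * p_data_given_not_h)

# Same diagnostic test: sensitivity 0.9, false-positive rate 0.1.
# Observer A knows only the base rate; observer B has extra case information.
for label, prior in [("observer A", 0.01), ("observer B", 0.30)]:
    print(label, round(posterior(prior, 0.9, 0.1), 3))
```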

        • Zhou:
          On the other hand it does focus on what is a good choice for the (Frequentist) reference set, and even if it’s not usually appropriate – sometimes it is (or at least adequate).

          More generally, I have come to think it’s all about representing what would repeatedly happen adequately. In Frequentist approaches, what data would repeatedly be observed; in Bayes, what parameters would be repeatedly encountered.

          I think the problem Andrew is pointing to is the advertising of the method as good regardless of context.

        • The data that would be repeatedly observed depends on the parameters that would be repeatedly encountered.

          If you don’t really know the distribution of the latter, you can try to make a reasonable guess at it (pure Bayes) or try to play it safe (frequentist).

    • An old criticism is Basu, D. (1980). Randomization analysis of experimental data: The Fisher randomization test, JASA, 75, 575-582 + Discussion.

      Personal thoughts: I was a grad student when this came out and it was required reading for discussion in my Stat Methods class. My own opinion is that the idea of randomization tests is useful, especially when the randomization is more complicated than simple randomization (e.g. adaptive randomization). But it’s not the ultimate frequentist test for randomized studies.

  3. I think labeling “classical (or frequentist) statistics as ‘the analysis of simple random samples’” is too loose. I would say frequentist probability (frequentist statistics is murkier) analyzes point or interval estimators across simple random samples, in line with its definition of probability as the proportion of times something happens in the limit as the number of randomizations goes to infinity. And I think it is fine to say something like the probability distribution of the sample mean across simple random samples is asymptotically normal without invoking beliefs. And you can get the guarantees that Wasserman refers to, although rarely for finite samples.
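
    A quick simulation of the kind of guarantee being described (my own sketch, not part of the original comment): the usual 95% interval for a mean, applied across many simple random samples from a skewed population, covers the true mean close to — but, at this sample size, not exactly — 95% of the time.

```python
# Coverage of the textbook 95% interval for a mean across repeated simple
# random samples from a skewed (Exponential) population.
import numpy as np

rng = np.random.default_rng(1)
true_mean, n, reps = 1.0, 50, 20_000

covered = 0
for _ in range(reps):
    x = rng.exponential(1.0, size=n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - half <= true_mean <= x.mean() + half)

print(f"coverage over {reps} repeated samples: {covered / reps:.3f}")  # near, not exactly, 0.95
```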

    But I tend to agree with Wasserman’s opposition to people trying to blur the distinction between frequentist and Bayesian analysis; the latter using probability to describe unknown parameters, missing data, and other things that are not even random variables from a frequentist perspective because they do not have distributions that are induced by randomization.

    I don’t think it is sufficient to say anything that utilizes Bayes’ Rule is Bayesian. To me, analyzing the properties of the posterior mean or mode across simple random samples is frequentist, even though you have to use Bayes’ Rule to get the posterior mean or mode as a point estimate. Also, dividing a joint probability by a marginal probability to get a conditional probability is frequentist if you also insist that the probabilities can only be estimated with the proportion of times something was observed. Conversely, obtaining a (perhaps penalized) maximum likelihood estimate and interpreting it as the center of a multivariate normal posterior distribution is Bayesian, even though Bayes’ Rule was not used and even though the posterior distribution you would have obtained if you had used Bayes’ Rule may not be that close to a multivariate normal.

    It also seems to me that Andrew’s version of frequentist analysis — roughly, obtain a point estimate by optimization, skip the testing of any null hypothesis, and proceed as if the point estimate were correct while recognizing that it is incorrect — is what Larry would call supervised learning because there is no need to introduce the probability distribution of a statistic across simple random samples under the null hypothesis if you are not going to test it.

    • Fair enough about not blurring the distinctions: but different aspects of an analysis can be Bayesian/frequentist – say in your example, analyzing the [frequentist] properties of a [Bayesian] posterior mean across random samples. Hence better to describe which *methods* are Bayesian/frequentist (or both) and whether in any particular analysis Bayesian/frequentist methods are used together, or not.

      • I have a hard time thinking of an example of applied research that should involve both frequentist and Bayesian analysis simultaneously because I would agree with Wasserman that their goals are different and largely incompatible. Using a posterior mode as a point estimator because it has good frequentist properties in some cases isn’t Bayesian in my mind (or Wasserman’s). The closest non-research example I can think of is poker, where you have a frequency distribution of hands that is induced by randomly shuffling the deck and then a sequence of Bayesian decision theory problems about whether to fold / call / raise when it is your turn. And there is a clear sense that if your strategies are sound, then you will make money in the aggregate if you play thousands of somewhat independent poker hands against inferior players, even though you know you will lose money on some of them no matter how well you play.

        A scientific study is mostly like an isolated hand of poker, where there might be some randomness due to sampling or treatment assignment, and you update your beliefs about the data-generating process in light of what you believed about it from the outset. But the frequentist agenda is to impose additional rules on how each individual scientific study is conducted in order to achieve collective properties across thousands of studies done by many different researchers that are randomly sampling from different populations and estimating different parameters in order to test different hypotheses. If you want to, or are forced to, contribute to that ensemble, then you have to follow the frequentist rules, especially the one about never making probabilistic statements about parameters but rather making probabilistic statements about a statistic conditional on the parameters you are estimating. If you don’t care about any of that, I don’t see any reason to accept the frequentist rules, in which case you can just go ahead and condition on the data at hand and update your beliefs about the process that generated that data.

        So, in order to be in a research situation where you would want to do a frequentist analysis and a Bayesian analysis simultaneously you would have to care about collective properties a little bit but not exclusively. Maybe some people identify with that, but you still would have to follow the frequentist rules and then break them when you do the Bayesian analysis, which goes back to the question of whether the frequentist rules are a good thing? I think it is more plausible that some people in your audience think the frequentist rules are a good thing and some think the frequentist rules are a bad thing, so you put both the frequentist analysis and the Bayesian analysis into a paper in an attempt to appeal to both camps. But that is a question of publishing / career strategy rather than the relative merits of incompatible approaches.

        • Since we’ve been talking about the meaning of words lately (like “socialism”), I have to ask what you mean by frequentism. If you are referring only to the Neyman-Pearson error-control procedures then sure, probabilistic statements about parameters are not really part of it. But if you also mean frequentism to include the Fisherian p-value then you can make such statements, either vaguely via Fisher’s disjunction or explicitly via a likelihoodist p-value interpretation.

        • > If you want to, or are forced to, contribute to that ensemble
          Why would you not want to contribute to that ensemble?

          Science is not a solitary activity and so one study on its own is simply not science.

          Now, when there are few studies and it’s appropriate to do more, strict adherence is likely not that important and then when there are enough studies – not that restrictive.

  4. At the risk of positing overly restrictive definitions myself, I don’t see any way that Krugman’s economic views fit under any reasonable definition of socialism. He is a social democrat, to be sure. But Krugman is, without doubt, a capitalist. He does not advocate for ownership of the means of production by the workers. And I think that any “definition” of socialism that does not include that restriction is both historically wrong and probably intentionally misleading, an attempt to smear anything to the left of Milton Friedman as communist-inspired.

    • “He does not advocate for ownership of the means of production by the workers.”

      Perhaps. But regulating the means of production and value and type of compensation through government is an even better deal for workers than owning companies: they can control them without carrying the financial risk.

      • Uh huh. Yea because workers don’t take on any risks and obviously have way more power than the capitalists in such social democratic hellholes as Norway.

        • Workers take financial risk to the extent that they invest in / own a company. From what I can see, workers/unions prefer to dictate operations through bargaining rather than ownership for good reason: ownership would force them to own failure.

        • Jim, safe to say that we have irreconcilable differences of view over this, and I say this as someone who has owned a small business and whose wife runs two.

    • It seems to me that Socialism has come to mean the government guaranteeing all citizens a certain set of economic resources, regardless of the mechanism by which that guarantee comes about.

      It doesn’t much matter if workers own the means of production if the government taxes capital owners sufficiently that they can hand out free health care, child care, subsidized transportation, and education… I think this was Jim’s point as well. Basically for the final outcome, ownership is fungible with taxation and redistribution.

      • One can certainly agree that in terms of improved well-being for workers, sure the outcomes are similar. But there is a very fundamental distinction that I cannot agree to blur between democratic (or autocratic committee a la earlier CCP and USSR) ownership over “the means” versus redistributive taxation (that is mathematically necessary to prevent inequality from tending towards infinity, see e.g.: https://ergodicityeconomics.com/2017/08/14/wealth-redistribution-and-interest-rates/) that is put towards ensuring a broad social welfare state.
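
        (A toy simulation in the spirit of the linked ergodicity-economics argument, with invented parameters: purely multiplicative wealth growth concentrates wealth without bound, while a small flat redistribution stabilizes the top share.)

```python
# Multiplicative wealth dynamics with and without a small redistribution
# (all parameters invented for illustration).
import numpy as np

rng = np.random.default_rng(3)

def top1_share(redistribution, agents=10_000, periods=500):
    w = np.ones(agents)
    for _ in range(periods):
        w *= rng.lognormal(mean=0.0, sigma=0.2, size=agents)      # random multiplicative shocks
        w = (1 - redistribution) * w + redistribution * w.mean()  # flat tax, equal rebate
    w.sort()
    return w[-agents // 100:].sum() / w.sum()

print(f"top 1% wealth share, no redistribution: {top1_share(0.00):.2f}")
print(f"top 1% wealth share, 5% per period:     {top1_share(0.05):.2f}")
```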

        I know Andrew’s post is titled “against overly restrictive definitions”, but this is clearly a case where you muddy the waters too much by equivocating between social democracy and any historically grounded definition of socialism. That they share certain humanistic goals does not mean they do not diverge substantially in terms of means and ends.

        • Language is an interesting thing… It seems to me that it’s too late to insist on something like “Socialism means worker/government ownership of the means of production” though I note that this is in fact where Wikipedia lands firmly:

          “Socialism is a political, social, and economic philosophy encompassing a range of economic and social systems characterised by social ownership[1][2][3] of the means of production[4][5][6][7] and workers’ self-management of enterprises.[8][9] It includes the political theories and movements associated with such systems.[10] Social ownership can be public, collective, cooperative or of equity.[11] While no single definition encapsulates many types of socialism,[12] social ownership is the one common element.[1][13][14] “

          I just think that if you asked the average say 25 year old today for an example of socialism… they’d probably point you at Sweden or Germany or The Netherlands or similar. They certainly wouldn’t point you at Nazi Germany, though the Nazi party was all about “National Socialism”. And they wouldn’t point you at the USSR because that’d be called Communism.

          And the examples I mention, Sweden etc are FAR from worker/govt ownership of production. So it can be useful to make some kind of jargon definition for use in political theory or whatnot like “Original Socialism”, but today the everyday meaning of the word is set by what people mean when they use the word… and I don’t think it’s ownership of production.

          This reminds me of David Graeber’s discussion of everyday communism. If socialism means ‘taking care of people’ then I’m 100% on board :) And I agree that lingo in the US is shifting towards this. Still, even the Democratic Socialism of the Nordics involves a lot more public ownership of key industries than the kind of social democracy that, say, Sanders promotes or that is rooted in the New Deal, etc. That’s why it’s a ‘mixed’ economy.

        • For me a big part of UBI is that it’s taking care of people in a super efficient market driven way. You might argue that what we have now is communism for the poor people: government provided housing projects, govt provided food kitchens, govt provided pre-schools etc. The alternative is to stop having the govt decide what everyone needs, and let people make those decisions themselves: namely cash. ;-)

          What I don’t think is viable is “let them live under a bridge and shoot up heroin if they can’t get their act together” which is the implied alternative in the US.

          We need some new terminology, something like “social capitalism” or something.

        • “You might argue that what we have now is communism for the poor people: government provided housing projects, govt provided food kitchens, govt provided pre-schools etc.”

          My impression is that “govt provided food kitchens” are very few and far-between, compared to the other items in the list. (The only government provided food kitchens I can think of are the school related programs of free breakfasts and lunches). Is there something I am missing?

        • School lunches alone are apparently a huge fraction of all food consumption in the US. I was reading about that during the early days of the COVID thing, when they shut schools it was a bit of a nightmare in terms of how many kids were going to go without sufficient food.

          But it’s true that most food kitchens themselves are NGO. Still, the Govt gives people food credits (SNAP / food stamps / EBT). Which is a way of the government deciding on the quantity and type of consumption, which is a communist idea, the govt figures out how much of each thing each person needs…

        • Daniel said,
          “School lunches alone are apparently a huge fraction of all food consumption in the US. I was reading about that during the early days of the COVID thing, when they shut schools it was a bit of a nightmare in terms of how many kids were going to go without sufficient food.”

          In Austin, school lunches and breakfasts are continuing while school is closed: See https://www.austinisd.org/covid19/meals (The site says that weekend meals and meals for caretakers are also provided.)

          (I don’t know how many school districts are doing this, and I’m not sure how the Austin program is being funded. The website above seems to suggest that it is partly funded by donations to a Crisis Support Fund.)

        • Not to get too deep into the politics of economics, but I actually think there are a huge number of other alternatives, most of which are not being explored due to some combination of political partisanship, following other nations’ models that seem to work/historical contingency, and “sunk cost fallacy”.

          And that the alternatives that are likely to be best going forward will not be the ones currently getting a lot of attention, since most *political* discourse about economics trails well behind what the economy has actually become.

          The whole capitalism/socialism/communism “spectrum” comes out of a late 19th and early-to-mid 20th century industrial economy. As we move into a post-industrial economy I think it will become more and more irrelevant. Right now, I think lower labor costs overseas are suppressing a lot of the really radical technology-induced shifts in the economy, but that will probably not last much longer.

          Also, I think historical contingency is fairly huge here. The Great Depression shaped the “Western” world’s ideas about prosperity and poverty more than I think is really wise, given how different the economy is now vs. 80 years ago.

        • confused, it sounds like you inhabit a similar region of economic idea space to the one I’m wandering around in. Unfortunately countries seem to change on the scale of 2 or 3 lifetimes while technology changes on the scale of 5 years.

        • Similar ways of thinking about it, maybe, but probably quite different conclusions. I am pretty skeptical of things like UBI at this time. It might well be a component of a really good solution, eventually, but I think it would be detrimental now without the other components.

          And I think the most important “other components” are technologies that don’t exist at a market-ready level yet, and others of which are spreading through the market but not yet significant enough to change the overall economic picture. So it can’t really be hurried, at least at the government level. (Some private efforts probably can hurry it, perhaps unintentionally, by pursuing high risk/high reward technologies. But unfortunately, this is too politically risky for governments to do unless it is militarily valuable.)

        • Definitely not. I think it would take far more space… and several years to see what happens with certain technologies now developing :)

  5. Is this a more acceptable definition?

    Frequentist properties uniformly characterize the sampling distribution of a statistical procedure in the context of a given probability model. Bayesian properties characterize a statistical procedure in terms of the set of joint distribution-loss function pairs which reproduce the procedure as a generalized Bayes rule.

    Extreme frequentists evaluate candidate statistical procedures exclusively in terms of their frequentist properties. Extreme Bayesians evaluate candidate statistical procedures exclusively in terms of their Bayesian properties. In between the extremes, there is a continuum who care about both frequentist and Bayesian properties in varying proportions.
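
    A toy example of the two kinds of properties for one and the same procedure (my own sketch, with invented numbers): the shrinkage estimator c * xbar for a normal mean is the generalized Bayes rule under a N(0, tau^2) prior with squared-error loss, and it also has a perfectly well-defined frequentist risk curve over fixed values of theta.

```python
# One procedure, two kinds of properties (all numbers invented).
import numpy as np

sigma, n, tau = 1.0, 10, 0.5
c = tau**2 / (tau**2 + sigma**2 / n)   # Bayesian property: the (prior, loss) pair it solves

thetas = np.array([0.0, 0.5, 1.0, 2.0])
risk = c**2 * sigma**2 / n + (1 - c)**2 * thetas**2   # frequentist property: MSE at each fixed theta
xbar_risk = sigma**2 / n

for t, r in zip(thetas, risk):
    print(f"theta = {t:.1f}: MSE(c * xbar) = {r:.3f}, MSE(xbar) = {xbar_risk:.3f}")
# The extreme frequentist looks at the whole risk curve; the extreme Bayesian
# asks which prior and loss make the rule optimal; most people care about both.
```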

      • I think you’re specifically talking about minimax theory here. I’m not sure that’s how a frequentist would characterize frequentism. Minimaxity is a frequentist property, but it’s not the only one and not one that frequentists collectively care about. What they collectively care about is frequentist properties. Beyond this there’s a variety of perspectives. Just as Bayesians collectively care about Bayesian properties, but differ in terms of which properties they care about.

        A Gelmanian and a subjectivist can agree that how diffuse the implied prior behind a procedure is matters to whether it makes sense to use that procedure. Still, the subjectivist might care about whether that diffusion accurately represents her beliefs, while the Gelmanian might care about whether it interacts with the likelihood to produce calibrated posterior predictions. This is the sense in which they’re both Bayesians.

        • “I think you’re specifically talking about minimax theory here. I’m not sure that’s how a frequentist would characterize frequentism.”

          Good point. Frequentist methods usually are not explicitly presented in terms of minimaxity, though point estimation might be a notable exception.

          That said, you’ll find the worst-case mindset lurking everywhere. In testing, some kind of Type I error rate must be controlled at, say, the 5% level, even in the worst case, at least asymptotically. In interval estimation, the coverage rate must be at least, say, 95%, even in the worst case, at least asymptotically.

          Bayesians, with notable exceptions such as some robust Bayesians, tend to focus less on the worst case distribution. That is the real difference, not a concern about performance over repeated sampling, which matters to everyone. At the pure Bayes extreme, a single prior is used with the expectation that it will perform better *over repeated sampling* in actual real world cases than would a worst-case method.

          (Here, I’m excluding the machine learning “culture” from both frequentism and Bayesianism.)
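
          A small simulation of that contrast (a sketch with invented numbers, not from the comment): the standard 95% interval for a normal mean holds its coverage at every fixed theta, while the credible interval from a N(0, 0.5^2) prior has about 95% coverage averaged over theta drawn from that prior but falls well short at a fixed theta out in the tail.

```python
# Worst-case versus prior-averaged coverage for a normal mean (invented setup).
import numpy as np

rng = np.random.default_rng(2)
sigma, n, tau, reps = 1.0, 10, 0.5, 20_000
se = sigma / np.sqrt(n)
shrink = tau**2 / (tau**2 + se**2)
post_sd = np.sqrt(shrink) * se

def coverage(thetas):
    xbar = rng.normal(thetas, se)
    ci = np.abs(xbar - thetas) <= 1.96 * se                   # standard confidence interval
    cred = np.abs(shrink * xbar - thetas) <= 1.96 * post_sd   # credible interval under the N(0, tau^2) prior
    return ci.mean(), cred.mean()

ci_avg, cred_avg = coverage(rng.normal(0, tau, reps))   # theta drawn from the prior
ci_fix, cred_fix = coverage(np.full(reps, 2.0))         # theta fixed far from the prior mean
print(f"theta ~ prior:    CI {ci_avg:.3f}, credible {cred_avg:.3f}")
print(f"theta fixed at 2: CI {ci_fix:.3f}, credible {cred_fix:.3f}")
```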

  6. Re: Krugman and Mankiw not economics, this is a tough one because economics means two things: economic science (academics) and economic policy (via Govt, newspapers, think tanks). When they write in the NY Times or Wall St J, they’re doing the latter, and it’s obviously not the same as publishing in Econometrica or American Economic Review. So is it economics? Yes. And no.

      • I think the thing about Krugman is that he’s an economist who says things that no economists actually believe, including himself when he has his professional economist hat on.

        Like the broken window fallacy: that the economy would be stimulated by the destruction of the WTC towers.

        “First, the driving force behind the economic slowdown has been a plunge in business investment. Now, all of a sudden, we need some new office buildings. As I’ve already indicated, the destruction isn’t big compared with the economy, but rebuilding will generate at least some increase in business spending.” https://www.nytimes.com/2001/09/14/opinion/reckonings-after-the-horror.html

        This is literally Bastiat 101. They broke more than a few windows… Overall it’s a net loss. Do some builders benefit? Yes, but the world loses. This is obvious. Any resources spent then on rebuilding the trade center towers were money not spent on doing something else… and then we’d have had WTC and that other thing…

        Krugman… yeesh…

        • Daniel:

          Unfortunately, saying something nonsensical in public does not disqualify someone from being an economist. Recall Gary Becker’s “suicide” remark. Like it or not, a big part of modern economics is being counterintuitive—and “counterintuitive” can easily shade into just plain foolish.
