Search results for falsificationist bayes

Not everyone’s aware of falsificationist Bayes

Stephen Martin writes: Daniel Lakens recently blogged about philosophies of science and how they relate to statistical philosophies. I thought it may be of interest to you. In particular, this statement: From a scientific realism perspective, Bayes Factors or Bayesian posteriors do not provide an answer to the main question of interest, which is the […]

Bayesians are frequentists

Bayesians are frequentists. What I mean is, the Bayesian prior distribution corresponds to the frequentist sample space: it’s the set of problems for which a particular statistical model or procedure will be applied. I was thinking about this in the context of this question from Vlad Malik: I noticed this comment on Twitter in reference […]
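The correspondence in this excerpt can be checked by simulation. The sketch below is a minimal, hypothetical example (the model and all numbers are assumptions, not from the post): draw a "problem" theta from a N(0, 1) prior, observe one data point y ~ N(theta, 1), and form the 95% posterior interval from the conjugate posterior N(y/2, 1/2). When the prior really does describe the ensemble of problems to which the procedure is applied — the frequentist sample space in the quote — the Bayesian interval has the advertised frequentist coverage.

```python
import math
import random

random.seed(1)

n_sims = 100_000
half_width = 1.96 * math.sqrt(0.5)  # half-width of the 95% posterior interval
covered = 0

for _ in range(n_sims):
    theta = random.gauss(0.0, 1.0)  # a problem drawn from the prior
    y = random.gauss(theta, 1.0)    # one observation from that problem
    post_mean = y / 2.0             # conjugate normal-normal posterior mean
    if abs(post_mean - theta) <= half_width:
        covered += 1

coverage = covered / n_sims
print(f"empirical coverage over the prior ensemble: {coverage:.3f}")
```

If instead the thetas were drawn from some other distribution than the prior, the empirical coverage would drift away from 95% — which is one way to read the claim that the prior plays the role of the sample space.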

I disagree with Tyler Cowen regarding a so-called lack of Bayesianism in religious belief

Tyler Cowen writes: I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” […]

Confirmationist and falsificationist paradigms of science

Deborah Mayo and I had a recent blog discussion that I think might be of general interest so I’m reproducing some of it here. The general issue is how we think about research hypotheses and statistical evidence. Following Popper etc., I see two basic paradigms: Confirmationist: You gather data and look for evidence in support […]

“Two Dogmas of Strong Objective Bayesianism”

Prasanta Bandyopadhyay and Gordon Brittan write: We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We […]

Philosophy of Bayesian statistics: my reactions to Senn

Continuing with my discussion of the articles in the special issue of the journal Rationality, Markets and Morals on the philosophy of Bayesian statistics: Stephen Senn, “You May Believe You Are a Bayesian But You Are Probably Wrong”: I agree with Senn’s comments on the impossibility of the de Finetti subjective Bayesian approach. As I […]

One more time on Bayes, Popper, and Kuhn

There was a lot of fascinating discussion on this entry from a few days ago. I feel privileged to be able to get feedback from scientists with different perspectives than my own. Anyway, I’d like to comment on some things that Dan Navarro wrote in this discussion. Not to pick on Dan but because I think his comments, and my responses, may highlight some different views about what is meant by “Bayesian inference” (or, as I would prefer to say, “Bayesian data analysis,” to include model building and model checking as well as inference).

So here goes . . .

Bayes, Popper, and Kuhn

Since I’m referring to other people’s stuff, let me link to a recent entry on statistics and philosophy in Dan Navarro’s blog. Which I pretty much agree with except for the remark that Bayesian statistics is Kuhnian. I disagree strongly–I think Bayesian data analysis is Popperian. But, then again, Sander Greenland disagrees with me. So […]

Several reviews of Deborah Mayo’s new book, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars

A few months ago I sent the following message to some people: Dear philosophically-inclined colleagues: I’d like to organize an online discussion of Deborah Mayo’s new book. The table of contents and some of the book are here at Google books, also in the attached pdf and in this post by Mayo. I think that […]

Our hypotheses are not just falsifiable; they’re actually false.

Everybody’s talkin bout Popper, Lakatos, etc. I think they’re great. Falsificationist Bayes, all the way, man! But there’s something we need to be careful about. All the statistical hypotheses we ever make are false. That is, if a hypothesis becomes specific enough to make (probabilistic) predictions, we know that with enough data we will be […]
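The point that a specific-enough hypothesis will eventually be rejected can be illustrated with a small simulation. This is a hypothetical sketch, not from the post: the null hypothesis "mean = 0" is almost right but not exactly (the true mean 0.02 is an assumed effect size), and a z-test with known sd = 1 fails to notice the discrepancy at small n but detects it reliably once n is large.

```python
import math
import random

random.seed(2)

def z_statistic(n, true_mean=0.02):
    """|z| for testing H0: mean = 0 with known sd = 1, from n draws."""
    sample_mean = sum(random.gauss(true_mean, 1.0) for _ in range(n)) / n
    return abs(sample_mean) * math.sqrt(n)

z_small = z_statistic(100)        # typically below 1.96: H0 not rejected
z_large = z_statistic(1_000_000)  # roughly 20: H0 decisively rejected
print(f"|z| at n=100: {z_small:.2f}; |z| at n=1,000,000: {z_large:.2f}")
```

The hypothesis was false all along; only the sample size changed. That is the sense in which "with enough data" any precise null will be rejected, which is why rejection by itself tells us little.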

When does the quest for beauty lead science astray?

Under the heading, “please blog about this,” Shravan Vasishth writes: This book by a theoretical physicist [Sabine Hossenfelder] is awesome. The book trailer is here. Some quotes from her blog: “theorists in the foundations of physics have been spectacularly unsuccessful with their predictions for more than 30 years now.” “Everyone is happily producing papers in […]

Paul Meehl continues to be the boss

Lee Sechrest writes: Here is a remarkable paper, not well known, by Paul Meehl. My research group is about to undertake a fresh discussion of it, which we do about every five or ten years. The paper is now more than a quarter of a century old but it is, I think, dramatically pertinent to […]

Statistics controversies from the perspective of industrial statistics

We’ve had lots of discussions here and elsewhere online about fundamental flaws in statistics culture: the whole p-value thing, statistics used for confirmation rather than falsification, corruption of the pizzagate variety, soft corruption in which statistics is used in the service of country-club-style backslapping, junk science routinely getting the imprimatur of the National Academy of […]

Exchange with Deborah Mayo on abandoning statistical significance

The philosopher wrote: The big move in the statistics wars these days is to fight irreplication by making it harder to reject, and find evidence against, a null hypothesis. Mayo is referring to, among other things, the proposal to “redefine statistical significance” as p less than 0.005. My colleagues and I do not actually like […]

Hypothesis Testing is a Bad Idea (my talk at Warwick, England, 2pm Thurs 15 Sept)

This is the conference, and here’s my talk (will do Google hangout, just as with my recent talks in Bern, Strasbourg, etc): Hypothesis Testing is a Bad Idea Through a series of examples, we consider problems with classical hypothesis testing, whether performed using classical p-values or confidence intervals, Bayes factors, or Bayesian inference using noninformative […]

Empirical implications of Empirical Implications of Theoretical Models

Robert Bloomfield writes: Most of the people in my field (accounting, which is basically applied economics and finance, leavened with psychology and organizational behavior) use ‘positive research methods’, which are typically described as coming to the data with a predefined theory, and using hypothesis testing to accept or reject the theory’s predictions. But a substantial […]

That odd couple, “subjectivity” and “rationality”

Nowadays “Bayesian” is often taken to be a synonym for rationality, and I can see how this can irritate thoughtful philosophers and statisticians alike: To start with, lots of rational thinking—even lots of rational statistical inference—does not occur within the Bayesian formalism. And, to look at it from the other direction, lots of self-proclaimed Bayesian […]

An unexpected benefit of Arrow’s other theorem

In my remarks on Arrow’s theorem (the weak form of Arrow’s Theorem is that any result can be published no more than five times. The strong form is that every result will be published five times), I meant no criticism of Bruno Frey, the author of the articles in question: I agree that it can […]