Occam
Cosma Shalizi and Larry Wasserman discuss some papers from a conference on Ockham’s Razor. I don’t have anything new to add on this so let me link to past blog entries on the topic and repost the following from 2004: … Continue reading
In my comments on David MacKay’s 2003 book on Bayesian inference, I wrote that I hate all the Occam-factor stuff that MacKay talks about, and I linked to this quote from Radford Neal: Sometimes a simple model will outperform a … Continue reading
Regarding my anti-Occam stance (“I don’t count ‘Occam’s Razor,’ or ‘Ockham’s Razor,’ or whatever, as a justification. You gotta do better than digging up a 700-year-old quote.”), David Gillman writes: I was at your talk at MIT yesterday, and something … Continue reading
This post is by Yuling, not Andrew. We have been talking about how Bayesian inference can be flawed. Particularly, we have argued that discrete model comparison and model averaging using marginal likelihood can often go wrong, unless you have a … Continue reading
Gur Huberman points us to this news article by Nicholas Bakalar, “Vigorous Exercise Tied to Macular Degeneration in Men,” which begins: A new study suggests that vigorous physical activity may increase the risk for vision loss, a finding that has … Continue reading
tl;dr If you have bad models, bad priors, or bad inference, choose the simplest possible model. If you have good models, good priors, and good inference, use the most elaborate model for predictions. To make interpretation easier you may use a … Continue reading
Under the heading, “please blog about this,” Shravan Vasishth writes: This book by a theoretical physicist [Sabine Hossenfelder] is awesome. The book trailer is here. Some quotes from her blog: “theorists in the foundations of physics have been spectacularly unsuccessful … Continue reading
OK, we’ve been seeing this a lot recently. A psychology study gets published, with a key idea that at first seems wacky but, upon closer reflection, could very well be true! Examples: – That “dentist named Dennis” paper suggesting that … Continue reading
Glenn Chisholm writes: As a frequent visitor of your blog (a bit of a long-time listener, first-time caller comment, I know) I saw this particular controversy: Summary: https://drive.google.com/file/d/0B6mLpCEIGEYGYl9RZWFRcmpsZk0/view?pref=2&pli=1 Very superficial analysis: https://docs.google.com/document/d/1SdmBLFW9gISaqOyyz_fATgaFupI2-n6vWx80XRGUVBo/edit?pref=2&pli=1 and was interested if I could … Continue reading
I learned from this comment that David MacKay has passed away. Here’s an obituary, which has a lot of information, really much more than I could give because I only met MacKay a couple of times. The first time was … Continue reading
The winner of yesterday’s bout is Thoreau. The best pro-Thoreau argument came from JRC: “This one breaks down to whose narrative on loneliness and solitude is more interesting: the guy who removed himself from society, or the guy forcibly … Continue reading
If I made a separate post for each interesting blog discussion, we’d get overwhelmed. That’s why I often leave detailed responses in the comments section, even though I’m pretty sure that most readers don’t look in the comments at all. … Continue reading
Michael Landy writes: I’m in Psych and Center for Neural Science and I’m teaching a doctoral course this term in methods in psychophysics (never mind the details) at the tail end of which I’m planning on at least 2 lectures … Continue reading
I’ve recently been reading David MacKay’s 2003 book, Information Theory, Inference, and Learning Algorithms. It’s great background for my Bayesian computation class because he has lots of pictures and detailed discussions of the algorithms. (Regular readers of this blog will … Continue reading
It’s been a dramatic month: A month ago, a coalition of some of the leading teams qualifies for the $1 million grand prize for improving the accuracy of the movie-recommending model by more than 10%. But, they would close the … Continue reading
There’s some cool and (possibly) important stuff in Yue Cui’s dissertation summary (under the supervision of Jim Hodges and Brad Carlin at University of Minnesota biostat). The short story is that, for reasons of substantive modeling as well as prediction, … Continue reading
The comments to a recent entry on “what is a Bayesian” moved toward a discussion of parsimony in modeling (also noted here). I’d like to comment on something that Dan Navarro wrote. First I’ll repeat Dan’s comments, then give my reactions. Continue reading
Aleks pointed us to an interesting article on the foundations of statistical inference by Walter Kirchherr, Ming Li, and Paul Vitanyi from 1997. It’s an entertaining article in which they discuss the strategy of putting a prior distribution on all … Continue reading
A lot has been written in statistics about “parsimony”—that is, the desire to explain phenomena using fewer parameters—but I’ve never seen any good general justification for parsimony. (I don’t count “Occam’s Razor,” or “Ockham’s Razor,” or whatever, as a justification. … Continue reading