He wants to get started on Bayes

Mathew Mercuri writes:

I am interested in learning how to work in a Bayesian world. I have training in a frequentist approach, specifically from an applied health science/epidemiology perspective. However, while I teach courses in applied statistics, I am not particularly savvy with heavy statistical mathematics, so I am a bit worried about how to enter into this topic.

Can you recommend a book on Bayesian statistics for a frequentist trying to make the conversion? I am interested in your book, but I am not sure if it is the correct entry point. Note: I have read Howson and Urbach’s book on Bayesian reasoning, so I am not a complete novice.

My reply: I recommend Richard McElreath’s book as a start.

28 thoughts on “He wants to get started on Bayes”

  1. I love McElreath’s book, but I would also really recommend starting with Kruschke’s Doing Bayesian Data Analysis (get the second edition).

    Kruschke makes it really easy for a self-study learner to work through all the examples and homework exercises, and we all know that’s where the real learning happens. I’d dabbled with Bayes for years but felt there was a conceptual block keeping me from really understanding how to use the techniques practically. Then I spent about a week in which I dropped nearly everything else and worked my way through the first ten chapters or so of Kruschke (about two chapters a day, doing most or all of the exercises for each chapter), and that was the point where I felt I really got what it was about; the other, more advanced books started making sense to me after that.

    I’ve had a bunch of my graduate students use Kruschke as the entry point to move from classical frequentist stuff to Bayesian and they all agree that K’s book is really clear and terrific for self-study.

    • I just wish that Kruschke would move the code (which is really clunky) out of the book and into an online supplement. I still haven’t managed to read it, because the ebook version I bought keeps crashing my PDF viewer due to its size.

      • For a beginner I think it is very valuable to have the code nicely formatted directly in the main text, especially in a book with a focus on application.

        The problem here might be bad ebook conversion, which isn’t an isolated issue.

        It’s always painful to see an excellent textbook receive 1-star ratings on Amazon because the Kindle edition sucks. In your case it’s a PDF instead, but they can probably screw that up as well.

        • I would also strongly recommend the Kruschke book, which is what I used as a starter.

          I agree that there is merit in having the code directly in the main text (which is of course no excuse for providing an ebook that crashes).

          If your institution has a subscription, you can download it chapter by chapter via ScienceDirect: http://www.sciencedirect.com/science/book/9780124058880

          In this way, I never had problems reading this book electronically.

        • +1 for nicely formatted code directly in the main text.

          IMO, Kindle is not ready yet for technical books/PDFs. It ruins the typesetting or slows to a crawl.

          Granted, I last checked more than a year ago, so perhaps things have changed by now.

        • The Kindle file format can produce some very beautiful and readable e-books (see for instance “Molecular Biology of the Cell”). The problem is that the Kindle hardware doesn’t really handle those – an iPad or a full-fledged computer is necessary.

        • But then, what’s the point? If I want to use my computer then why not stick to a pdf?

          Are you saying reading the Kindle format on a PC is better than reading a pdf on a PC?

        • For a lot of technical books (including McElreath’s book and BDA3), what’s going on under the hood is that the Kindle file does not use the Kindle’s native mobi format, but is just a wrapper around a PDF, adding DRM to discourage copying. That’s why you can only read them on hardware, such as tablets and PCs, with the computing and display power to render PDFs.

        • I misremembered. Somehow I had the impression that those files did some degree of paragraph reflow when zoomed in (like standard Kindle or HTML), but I’ve just checked, and no: there’s no text adjustment, just a PDF with another name. Ebooks are really a stone-age format.

      • Actually, my memory of the code seems to be faulty. I think in the first edition there were pages and pages of code, and that’s not there any more. There’s an odd use of = where one would normally use <-, but that’s OK. Quite a few people have told me the book is really good. I managed to get the PDF not to crash just now by using Skim instead of Preview, so it is a PDF viewer-side problem.
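
        (In case that remark is cryptic: in R, = and <- both do assignment at the top level, so code written with = runs fine; <- is just the more idiomatic style. A quick illustration, nothing from the book itself:)

          # Both assign the value 3; R style guides generally prefer <-.
          x = 3
          y <- 3
          identical(x, y)  # TRUE

          # Inside a function call they differ: = names an argument,
          # while <- performs an assignment as a side effect.
          mean(x = c(1, 2, 3))   # "x" here is just the argument name; returns 2
          mean(z <- c(1, 2, 3))  # also returns 2, and z now exists in the workspace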

        If you know a bit of calculus, you should read Lynch. I thought that book was awesome (although it has some minor errors in it; I think Christian Robert reported on that on his blog). The Lynch book is expensive, though, which is Springer’s usual approach of overpricing books.

        • Indeed. Almost 50 cents per page is a bit hefty, especially for a book with the word “Introduction” in the title.

          I can sort of understand the pricing for something more niche than an introduction to Bayesian statistics, but not in this case.

  2. I really like McElreath’s book and am using it to teach a grad Bayes course. However, it doesn’t give a lot of comparisons to the frequentist approach, so if you’ve been working in a frequentist framework it can be hard to see what the main differences are for each analysis (a comparison I find helpful).

    That’s why I also like Korner-Nievergelt et al. 2015, Bayesian Data Analysis in Ecology Using Linear Models with R, BUGS, and Stan. It’s written for ecologists, but it gives a side-by-side comparison of frequentist/Bayes, along with the R code. After struggling through some other texts, this one turned on the lightbulb for me.
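
    (For a flavor of what such a side-by-side looks like in R, here is a minimal sketch of my own rather than code from the book; it fits the same regression with lm() and then with stan_glm() from the rstanarm package, which uses weakly informative default priors.)

      library(rstanarm)  # assumes rstanarm is installed

      # Simulated data: one predictor, one outcome
      set.seed(1)
      d <- data.frame(x = rnorm(100))
      d$y <- 1 + 2 * d$x + rnorm(100)

      # Frequentist linear regression: point estimates, SEs, p-values
      fit_freq <- lm(y ~ x, data = d)
      summary(fit_freq)

      # Bayesian linear regression: posterior medians and uncertainty intervals
      fit_bayes <- stan_glm(y ~ x, data = d)
      summary(fit_bayes)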

  3. Don’t mean to hijack the subject, but are these books also good as an introduction/exposé on NHST and what we should do instead? If not, what can you recommend?

    The reason for my question is a comment on this blog a few days ago: “… the fundamental confusion about NHST and what it does or doesn’t say about the hypothesis of interest. A high percentage of empirical studies in economics employs the strategy of rejecting the null and simply concluding that this result is “consistent with” the hypothesis generated by some theoretical model.”

    Hey, that’s exactly how I was taught! Theory/previous literature provides testable hypotheses (basically one-sided ones, but we always test them two-sided). We test against the null of no effect, and if the parameter is found significant, we take that as support for our hypothesis. Because if you’ve done your job in the literature review / hypothesis derivation well, you should have considered all sides of the story; you could only come up with this hypothesis; so significant rejection of the null indicates support for your idea… right?

    I guess many of you will say “wrong,” but which book really explains an alternative well? I.e., really hands-on: how should I do my analysis and write it up in a paper? I found ‘Understanding the New Statistics’ by Cumming, but it’s a big investment for me to buy just because I *think* this is the kind of book I need. Advice welcome, thanks!

    • Thanks. Maybe I should have mentioned that I don’t do experiments at all. I work with existing datasets, so it’s all multiple regression and such for me. Maybe the NHST critique only really applies to the (social) psychology experiments Andrew often criticizes?

      • No. NHST of a straw-man hypothesis is bogus in the same way that this is:

        “If you are poor then you can’t afford a sandwich, you are not poor, therefore you are as rich as Bill Gates”

        The fact that you are not poor only allows you, logically, to say “we don’t know whether or not you can afford a sandwich” (perhaps some not-poor people also can’t afford a sandwich, for example because they don’t have much cash even if they have a lot of assets). You certainly can’t get from “you are not poor” to “you are as rich as Bill Gates,” which is the typical leap made in NHST on a daily basis by those taught the way you were.

        • It has the same logical error; the conclusions are rarely as extreme, but I made the example extreme so that the error would be more obvious.

          The logic of the NHST straw man is still “A implies B; from the data about B we see probably not-A, and then we assume that C must be true (our favorite alternative to A).”

          From a Bayesian perspective, we have something like:

          “All of the models A, B, C, D, E, F are somewhat plausible; from the data I rule out A with a high degree of certainty, and I am therefore left with various plausibilities for B, C, D, E, F according to the math of Bayesian inference.”

          If F is “you’re as rich as Bill Gates” and it has extremely small prior plausibility, then even after “probably not A” you’ll still have a small posterior plausibility for F. Frequentist NHST can’t incorporate that prior information about Bill within the same technique.
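
          (Numerically, that update is just Bayes’ rule over the candidate models. A toy sketch in R with made-up priors and likelihoods, where F starts out barely plausible:)

            # Six candidate models A..F; F ("rich as Bill Gates") gets a tiny prior.
            prior <- c(A = 0.30, B = 0.20, C = 0.20, D = 0.15, E = 0.149, F = 0.001)

            # Invented likelihoods of the observed data under each model;
            # the data are very unlikely under A.
            lik <- c(A = 0.001, B = 0.40, C = 0.30, D = 0.20, E = 0.10, F = 0.10)

            # Bayes' rule: posterior is proportional to prior times likelihood.
            posterior <- prior * lik / sum(prior * lik)
            round(posterior, 3)
            # A is effectively ruled out, but F remains implausible because its
            # prior was tiny; that is exactly the information NHST cannot use.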

        • I think the crux of the matter is this: Frequentist NHST can’t handle that prior info.

          Maybe I don’t have much prior info. Or I’m not confident about my prior?

    • Thank you, I’ll read those! It may be helpful (or not) to point out that I never do any experimental designs / ANOVA / t-tests (often used in the kinds of papers Andrew criticizes here), but work with existing/administrative datasets with many cases, doing multiple regression, multilevel modeling (not fully Bayesian), spatial regression, that kind of stuff. I.e., I don’t know if the beef with NHST only applies to the ‘get 40 students in the target group, get 40 students in the control group, do the intervention, test for p < .05’ design.

    • The Kruschke book mentioned by others doesn’t go into NHST in much detail, but there is a chapter specifically devoted to some of the issues and why he prefers Bayes.
