Maybe I don’t have much prior info. Or I’m not confident about my prior?

The logic of the NHST straw man is still “A implies B; from data about B we see probably not A, and then we assume that C must be true (our favorite alternative to A).”

From a Bayesian perspective, we have something like:

“Any of the models A, B, C, D, E, F is somewhat plausible; from the data I rule out A with a high degree of certainty, and I am therefore left with various plausibilities for B, C, D, E, F according to the math of Bayesian inference.”

If F is “you’re as rich as Bill Gates” and it has extremely small prior plausibility, then after “probably not A” you’ll still have a small posterior plausibility for F. Frequentist NHST can’t handle that prior info about Bill within the same technique.
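To make that concrete, here is a minimal sketch of the discrete Bayesian update described above (the model names come from the comment; the prior and likelihood numbers are invented purely for illustration):

```python
# Priors over the candidate models; F ("as rich as Bill Gates") starts tiny.
priors = {"A": 0.3, "B": 0.2, "C": 0.2, "D": 0.15, "E": 0.149, "F": 0.001}

# Likelihood of the observed data under each model; the data strongly
# disfavor A and say nothing to distinguish B through F.
likelihood = {"A": 0.001, "B": 0.5, "C": 0.5, "D": 0.5, "E": 0.5, "F": 0.5}

# Bayes: posterior is proportional to prior times likelihood, renormalized.
unnorm = {m: priors[m] * likelihood[m] for m in priors}
total = sum(unnorm.values())
posterior = {m: p / total for m, p in unnorm.items()}

# A is effectively ruled out, but F keeps a negligible posterior because
# its prior was negligible -- the data never argued *for* F specifically.
```

Even though the data crush A, the posterior for F stays tiny; nothing in “probably not A” ever favored F, which is exactly the prior information NHST has no slot for.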

“If you are poor then you can’t afford a sandwich; you are not poor; therefore you are as rich as Bill Gates.”

The fact that you are not poor only allows you logically to say “we don’t know whether or not you can afford a sandwich” (perhaps some not-poor people also can’t afford a sandwich, for example because they don’t have much cash even if they have a lot of assets). You certainly can’t get from “you are not poor” to “you are as rich as Bill Gates,” which is the typical leap used in NHST on a daily basis by those taught like you were.
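The sandwich syllogism can be checked mechanically by enumerating possible worlds (a toy sketch; the True/False encoding is invented for illustration):

```python
# Enumerate (poor, can_afford) worlds, keeping only those consistent
# with the premise "poor implies can't afford a sandwich".
worlds = [(poor, can_afford)
          for poor in (True, False)
          for can_afford in (True, False)
          if not (poor and can_afford)]

# Observation: "you are not poor".
consistent = [w for w in worlds if not w[0]]

# Both affordability values survive the observation, so "not poor"
# tells you nothing about the sandwich, let alone about Bill Gates.
affordable_options = {can_afford for _, can_afford in consistent}
```

Ruling out the “poor” worlds leaves worlds where you can afford the sandwich and worlds where you can’t, which is precisely the “we don’t know” conclusion above.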

The article by Andrew in that issue is here: http://www.stat.columbia.edu/~gelman/research/published/bayes_management.pdf — but I would suggest you read Gigerenzer’s first.

The reason for my question is a comment on this blog a few days ago: “… the fundamental confusion about NHST and what it does or doesn’t say about the hypothesis of interest. A high percentage of empirical studies in economics employs the strategy of rejecting the null and simply concluding that this result is ‘consistent with’ the hypothesis generated by some theoretical model.”

Hey, that’s exactly how I was taught! Theory/previous literature provides testable hypotheses (basically a one-sided one, but we always test it two-sided). We test against the null of no effect, and if the parameter is found significant, we take that as support for our hypothesis. Because if you’ve done your job in the literature review / hypothesis derivation well, you should have considered all sides of the story; you could only come up with this hypothesis; so significant rejection of the null indicates support for your idea… right?

I guess many of you will say “wrong,” but which book really explains an alternative well? I.e., really hands-on: how should I do my analysis and write it up in a paper? I found ‘Understanding the New Statistics’ by Cumming, but it’s a big investment for me to buy just because I *think* this is the kind of book I’d need. Advice welcome, thanks!

Are you saying reading the Kindle format on a PC is better than reading a pdf on a PC?

That’s why I also like Korner-Nievergelt et al. 2015, Bayesian data analysis in ecology using linear models with R, BUGS, and Stan. It’s written for ecologists, but gives a side-by-side comparison of frequentist/Bayes, along with the R code. After struggling through some other texts, this one turned on the lightbulb for me.

I can sort of understand the motivation for something nicer than an introduction to Bayesian statistics, but not in this case.

If you know a bit of calculus, you should read Lynch. I thought that book was awesome (although it has some minor errors in it; I think Christian Robert reported on that on his blog). The Lynch book is expensive, though, in Springer’s usual fashion of overpricing books.

IMO, Kindle is not ready yet for technical books / pdfs. It ruins the typesetting or slows to a crawl.

OK, the last time I checked was more than a year ago, so perhaps things have changed by now.

Sheer size shouldn’t crash a viewer unless the viewer is buggy. What size is the pdf, anyway?

I agree that there is merit in having the code directly in the main text (which is of course no excuse for providing an ebook that crashes).

If your institution has a subscription, you can download it chapter by chapter via ScienceDirect: http://www.sciencedirect.com/science/book/9780124058880

That way, I never had problems reading this book electronically.

The problem here might be a bad ebook conversion, which isn’t an isolated issue.

It’s always painful to see an excellent textbook receive 1-star ratings on Amazon because the Kindle edition sucks. In your case it’s a .pdf instead, but they can probably screw that up as well.

Kruschke makes it really easy for a self-study learner to work through all the examples and homework exercises, and we all know that’s where the real learning happens. I’d dabbled with Bayes for years but felt there was a conceptual block keeping me from really understanding how to use the techniques practically. Then I spent about a week where I dropped most everything else and worked my way through the first ten chapters or so of Kruschke (about two chapters a day, working most or all of the exercises for each chapter), and that was the point where I felt I really got what it was about, and the other, more advanced books started making sense to me.

I’ve had a bunch of my graduate students use Kruschke as the entry point to move from classical frequentist stuff to Bayesian and they all agree that K’s book is really clear and terrific for self-study.
