He wants to know what to read and what software to learn, to increase his ability to think about quantitative methods in social science

A law student writes:

I aspire to become a quantitatively equipped/focused legal academic. Despite majoring in economics at college, I feel insufficiently confident in my statistical literacy. Given your publicly available work on learning basic statistical programming, I thought I would reach out to you and ask for advice on understanding modeling and causal inference ab initio. I would be very grateful for any recommendations for books, learning software and whatever else you may think is necessary/helpful.

To start with, I recommend my forthcoming book with Jennifer Hill and Aki Vehtari, Regression and Other Stories, and also Richard McElreath's Statistical Rethinking as a way into Bayesian modeling. Both books use R and Stan.
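
To give a sense of what working in these tools looks like, here's a minimal sketch in R using the rstanarm package: simulate some fake data, fit a Bayesian linear regression, and look at the inferences. (The data and variable names here are just placeholders.)

# Minimal sketch: simulate fake data, then fit a Bayesian linear
# regression with rstanarm's default weakly informative priors.
library(rstanarm)

set.seed(1)
fake <- data.frame(x = rnorm(100))
fake$y <- 2 + 3 * fake$x + rnorm(100)

fit <- stan_glm(y ~ x, data = fake)
print(fit)  # posterior medians and MAD standard deviations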

These articles might be helpful too:

Causality and statistical learning (from 2011)

Why ask why? Forward causal inference and reverse causal questions (from 2013)

Evidence on the deleterious impact of sustained use of polynomial regression on causal inference (from 2015)

The failure of null hypothesis significance testing when studying incremental changes, and what to do about it (from 2018)

Also, I like the book, A Quantitative Tour of the Social Sciences, edited by Jeronimo Cortina and myself and containing contributions by leading researchers in the fields of history, sociology, economics, political science, and psychology. This is not a statistics book; rather, it gives a sense of the different ways that people in these different fields think about quantitative research.

I'm mostly pointing out stuff written or edited by myself because (a) that's what I'm most familiar with, and (b) I'm the one you asked! Maybe some commenters will have some suggestions of other good books to read and software to learn.

60 thoughts on "He wants to know what to read and what software to learn, to increase his ability to think about quantitative methods in social science"

  1. Is ‘Regression and Other Stories’ due out soon?

    (I'm a non-expert like the emailer, so I'll just note a few books that I found useful: Frank Harrell's Regression Modeling Strategies, David Freedman's books, and David Cox's books (Principles of Statistical Inference and Principles of Applied Statistics) in particular. Efron and Hastie's Computer Age Statistical Inference was quite interesting, but heavy going. Joshua Angrist's econometrics books, particularly Mastering 'Metrics, are easier to follow. Though IIRC none of these will give you advice on learning basic statistical programming per se. I found the aforementioned A Quantitative Tour of the Social Sciences interesting as well.)

  2. 1. The Cult of Statistical Significance by Stephen Ziliak and Deirdre McCloskey

    2. A World of Struggle: How Power, Law, and Expertise Shape the Global Economy, David Kennedy

    3. The Empire of Chance, Gerd Gigerenzer et al.

    4. John Ioannidis, Sander Greenland, and Steven Goodman's articles; Google and YouTube

    5. Andrew Gelman's articles, of course

    6. For and Against Method, Paul Feyerabend

    7. Beyond Significance Testing, Rex Kline

    I think it's equally important to get a grasp of the big-picture debates in statistics and in specific fields, like law, political science, and biomedicine.

    • Some comments from someone who has been down this path:

      a. I must say I never understood Andrew’s articles until I had gone through formal study. Andrew’s general-audience writing often feels like he assumes you already know some things—which a general audience doesn’t. Now that I know what it’s about (and I’ve been following this blog for 11 years, that helps!), I find Andrew’s general-audience articles very comprehensible. I wouldn’t start with his articles, but rather would end my period of formal study with them.

      b. I would completely avoid The Cult of Statistical Significance. It's an extended Twitter rant: the content is fine, but all you need is 280 characters to express the point, not a whole book. I read it, but it quickly became painful even though I was agreeing with the authors. This book should never have been written.

      The other stuff I don’t know much about or don’t have any comments on.

      • It was a while ago that I read The Cult of Statistical Significance, but I agree with Shravan that it’s a rant — it’s one of those things where you cringe because there are good points, but they are made in a way that could alienate a reader rather than convince them. As someone (perhaps my brother?) used to say, “With friends like that, who needs enemies?”

    • Ziliak and McCloskey overplay their hand significantly (and go off on a number of bizarre claims, so typical of McCloskey's work). It's not a book I'd recommend.

      • Your comment prompted me to check on Haack’s academic site https://miami.academia.edu/SusanHaack

        Now on my to-read list:

        Proof, Probability, and Statistics: The Problem of Delusive Exactness (2018)
        Haack opens her paper on "future directions" in scientific testimony with a quotation from Aristotle and an observation from a hundred-year-old U.S. Supreme Court ruling! Her reason? Her argument will be that the hankering for exactness where no exactness is possible that Holmes identified as "a source of fallacy throughout the law" still impedes us; and, specifically, that it gets in the way of a clear understanding of the proper role of probabilistic and statistical evidence—such as testimony of random-match DNA probabilities in criminal cases, or epidemiological testimony of increased risk in toxic-tort cases—in legal proof of questions of fact. As a result, the law sometimes asks more of statistics than statistics can give, and sometimes gets less from statistics than statistics could give.

        “The Pragmatist” [Oliver Wendell Holmes Jr.] (2019)
        The Pragmatism and Prejudice of Oliver Wendell Holmes Jr., 2019
        Justice Holmes has been described as a great liberal, a legal realist avant la lettre, a precursor of the “law and economics” movement, a moral skeptic or nihilist, and even as a proto-Nazi and, by one exasperated critic, a “cynical… brutalitarian”; but all these descriptions are, in one way or another, unsatisfactory. Historian of philosophy Max Fisch was much more illuminating when he wrote in 1942 that it had “not yet been shown how close [Holmes] stood to pragmatism.” Showing just how thoroughly pragmatist Holmes’s thought was, however, requires more evidence and more argument than Fisch could provide in his short paper. My goal is to supply that further evidence and further argument. I begin by telling the story of the origin and evolution of the pragmatist tradition, and Holmes’s part in that story—which, as we’ll see, was very far from straightforward (§1). Next, I argue that what united the philosophers of this tradition was, not a body of doctrine, but a congeries of themes, attitudes, and predilections that, despite their great diversity and their many disagreements, they all shared (§2). Then I show that the shared themes and attitudes found in Peirce, James, and Dewey can also be seen at work, in a distinctively legal form, in Holmes (§3); and finally that, once we recognize the pragmatist character of Holmes’s multi-faceted legal thinking, it’s easier to see what’s true but misleading, what’s half-true, and what’s just plain false in that litany of discrepant, one-dimensional classifications with which I began (§4).

        • Keith,

          Thank you for referring us to Susan Haack's website. Fortunately, I have watched some of her talks on YouTube. I will definitely read those two articles in particular. I lean to the hypothesis in the first article. It's one that Serge Lang stated and implied as well in the case of Samuel Huntington's bid for National Academy of Sciences membership. Serge got too vituperative as the debate ensued, undermining the strength of his positions.

          The broader question I have is: what is meant by 'statistical literacy' in the biomedical and medical domains? In reading Gerd Gigerenzer's Risk Savvy, I have inferred that consumers, patients, and prospective trial participants must become even more statistically savvy before making decisions. Yet there is a practice of engendering ever more technical arguments that exclude those most in need of statistical guidance.

          The distinctions raised in his book are not in the purview of much medical education. I would be interested in how John Ioannidis, Steven Goodman, Andrew, Sander Greenland, and others respond to the lack of statistical literacy in the medical profession specifically.

  3. As a law student wanting to be an academic, you should get familiar with what legal academics think is important (I teach in a law school).

    One thing that happens every year is this annual summer session (you just missed it!) on causal inference:
    http://www.law.northwestern.edu/research-faculty/conferences/causalinference/frequentist/

    Another thing you should do is attend the fall Conference on Empirical Legal Studies.

    (And learn some stats.)
    Good luck!

  4. I highly recommend "The Algebra of Probable Inference" by Richard T. Cox. It's an absolute gem and can be worked all the way through over the course of a plane ride. I came away from it (a) understanding at least on a basic level what could be done with statistics; and (b) realizing that most of the experts spouting statistics whom I cross-examine had never gotten to (a).

    P.S. As you likely already know legal professionals can’t open their office door without finding at least one ML/AI huckster waiting outside. Recently I sat through a presentation by yet another vendor selling Bem’s precognition. Supposedly, with their “algo” they can tell us how our judge will rule on any hypothetical motion we might file. They’ve also text-mined the briefs and subsequent opinions of the appellate courts and would be happy to sell us the key words, phrases and cases that’ll make the judges love our arguments and hate the other side’s. Instead of selling “how to think better” they’re selling “how to avoid too much thinking”. And yet, in-house clients also pestered by these people are beginning to ask us: “have you had someone model how our judge would rule on it?” Gah.

    I wish you the best as the law is in desperate need of people who can help kill all the statistical zombies besetting it.

    • Wow. Pretty depressing, but now that you’ve pointed it out, yes it shouldn’t be too surprising that this is happening. I guess now we have to figure out how to counter it or otherwise deal with it before it does too much damage.

  5. Thanatos,

    Thanks for that recommendation. Do you know of any book which presents statistical reasoning in legal cases? I know there are some articles in print. But no book that I know of.

    I meant to add the name of a forthcoming book, due out in October 2018, by Amos Tversky [posthumous]: Critical Thinking: Statistical Reasoning and Intuitive Judgment.

    • I've never come across such a book, though perhaps this, at page 211, might give you some insight: https://www.fjc.gov/sites/default/files/2015/SciMan3D01.pdf

      The short sad story of statistics and the law is that back in the early 70s rising legal superstar Lawrence Tribe wrote a law review article warning those interested in justice to eschew the dark methods of the Bayesians encamped beyond the gates. The Bayesian legal scholars of the era were (to greatly oversimplify) arguing for something dumb like this:

      If plaintiff gets run over by a bus but can’t identify the bus we should hand his bill for lost wages and medical bills to whichever bus company has the most buses in town.

      Tribe’s reply (again to greatly oversimplify) was to state what is now the well-known argument against breast cancer and prostate cancer screening given the apparently (but not actually) robust sensitivity/specificity of current screening methods. The result would thus be lots of innocent people made to pay for things they hadn’t done. An instance of the prosecutor’s fallacy tainted by obvious racial animus that had been widely publicized and overturned by the California Supreme Court a year or two earlier (if memory serves) primed the courts to be leery of statistics and Tribe confirmed their suspicions with a well-written rebuke.
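
      (To see the shape of Tribe's objection in numbers, here's a toy Bayes calculation in R. The sensitivity, specificity, and base rate are invented for illustration, not taken from his article:)

      sens <- 0.99     # P(test positive | guilty), invented
      spec <- 0.99     # P(test negative | innocent), invented
      base <- 0.001    # P(guilty), a rare-event base rate, invented

      p_pos <- sens * base + (1 - spec) * (1 - base)  # P(test positive)
      sens * base / p_pos   # P(guilty | positive) ~ 0.09: most positives are innocent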

      Something else that's important to understand is that when Oliver Wendell Holmes et al. were stating the Law, theirs was a clockwork universe and C. S. Peirce wasn't consulted about variability, whether of an object or of its measurement. The result was that the word "risk" rarely appeared for 100 years, as the courts were instead arguing about which billiard ball had knocked the 8-ball into the corner pocket and whether the cue ball was also to blame. They couldn't figure out how something that made predictions about the future could be used to render judgments about the past.

      When Stats couldn’t be avoided the courts adopted a frequentist stance and proceeded to pound the square risk peg into the round causation hole by latching on to the idea that “p less than .05 means we can be 95% sure that whatever was ‘tested’ is true and that’s more than adequate for our purposes.” Further evidence of their thinking can be found in the peculiar distinction between what they call “general causation” and “specific causation”. The idea here is that a change in a group mean somehow maps to causation “generally” and that the individuals specifically affected can somehow be precisely identified (usually either by abduction – whereupon a great argument erupts over what things go in the set of possible causes and how they ought to be weighed, or by scrutinizing the relative risk and dichotomizing at 2.0 – making for a double dichotomy as the relative risk putatively discovered must in most jurisdictions be “statistically significant”).
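
      (For the curious, the arithmetic behind the 2.0 line, under strong and much-contested assumptions, is just the attributable fraction among the exposed; a short R illustration:)

      # Probability-of-causation heuristic: attributable fraction (RR - 1) / RR,
      # which crosses the "more likely than not" 0.5 threshold exactly at RR = 2.
      prob_causation <- function(rr) (rr - 1) / rr
      prob_causation(c(1.5, 2, 3))   # 0.33, 0.50, 0.67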

      It's because of the foregoing that I follow as closely as I can the Pearl/Dawid debate. Given that most courts just want to decide a case and move on to the next one, my bet is that either DAGs won't make an appearance for 100 years, or the standard docket control order will require both sides' experts to produce a DAG, thereby reducing things to a ritual, much like the sacrifice of reason by the p-value knife.

      • Thanatos,

        Wow, thanks. It's also over 1,000 pages. Very interesting. I will try to leisurely amble through it over the next few months. I may have read Lawrence Tribe's law review article; the thesis sounded familiar.

        It would be fantastic for some enterprising lawyer or legal scholar to undertake a review of cases using statistical reasoning.

        In reading the introduction, I wonder whether it is ascertainable that judges/lawyers are better able to grasp the logic or illogic of statistical reasoning.

        • So I went to Google Scholar and looked for the most recent pronouncement on p-values by a federal appellate court interpreting the Reference Manual on Scientific Evidence (RMSE) and found the following (I'll let you decide whether they've a better grasp of things because of it):

          “We pause here to provide a brief overview of the concept of statistical significance and its proper role in the courtroom. Statistical significance is a measure of confidence that a trend observed in a dataset is not random. “A study that is statistically significant has results that are unlikely to be the result of random error . . . .” RMSE at 573. Statistical significance is typically expressed through a p-value. “A p-value represents the probability that an observed positive association could result from random error even if no association were in fact present.” Id. at 576 (emphasis removed). To determine whether an association is statistically significant, statisticians compare the p-value to a predetermined threshold value (also known as a significance level). If the p-value is smaller than the significance level, then the finding is statistically significant. Otherwise, it is not. “The most common significance level . . . used in science is .05. A .05 value means that the probability is 5% of observing an association at least as large as that found in the study when in truth there is no association.” Id. at 577 (footnote omitted).”

          The opinion also features a discussion of Fisher's Exact Test, back-of-the-envelope meta-analyses done by non-statisticians, and a discussion of why the so-called A. Bradford Hill causal criteria are applied to evidence rather than used to create evidence. The court reached the correct conclusion (if you agree that courtrooms aren't the best place for discovering the cause of diabetes and other diseases), but all the p-talk, the discussions of randomness, and the debate over whether 0.05 ought or ought not be the boundary line drawn between True and False is depressing. On the other hand, the fact that experts can be found who will, on whatever stats package they use, deploy one test after another until they get p less than the magic number, and who then forget to shred the contents of their file drawer so that the p-greater-than-magic-number folders are later discovered, did make me smile. Here's a link to the opinion: https://scholar.google.com/scholar_case?case=10965502063415301532

        • Thanatos,

          Thanks again. Let me say this: had I read that paragraph you quoted without some exposure to the controversies in statistics, I would be clueless as to how it would be helpful. Then too, plaintiff and defense lawyers are in the business of convincing the judge or jury of their side, sometimes using very strategic and skilled rhetoric, as you can attest.

          As I mentioned earlier, a book-length exposé of statistical reasoning in the courtroom would be fascinating. It may require a collaborative effort, but someone of the intellectual caliber of a Richard Posner may be able to undertake such an endeavor. Judge Posner is on to other things, but I'm sure he is still interested in this topic. Lawrence Tribe might be another. Either one could perhaps recommend one of their former students.

          Perhaps you can undertake it as you have the legal and statistical background.

          I egged Sander Greenland on many months ago. I don't recall how he answered, as it was only a brief mention.

        • Reads almost like a soap opera for statisticians –

          “The district court again took issue with Dr. [Nicholas] Jewell’s methodology. First, the court expressed concern with Dr. Jewell’s decision to replace the definition of diabetes used by the ASCOT endpoints committee with one of his own. Although Dr. Jewell is well-qualified as a statistician, he’s not a medical doctor or professional, nor does he have any particular expertise in diabetes. The court decided that Dr. Jewell lacked the expertise to “second guess” the judgments of the endpoints committee, and that it was inappropriate for “someone with no clinical expertise [to choose] to replace the adjudication committee’s determination of new-onset diabetes with particular unadjudicated raw data, namely lab values of his choice.”

          The court was not at all impressed with that statistical expert witness on a number of points – for what seems like very good reasons.

          By the way, the mid-p value is a quintessential example of a bright-line fetish. For example, suppose that because the outcomes are discrete, under the null hypothesis the most extreme outcome has probability .01 and the second most extreme has probability .08, so the only possible p-values are .01 and .01 + .08 = .09. Rejecting when the mid-p value is at the .05 threshold (here .01 + .08/2 = .05) is equivalent to always rejecting on the most extreme outcome and flipping an unbiased coin on the second: declaring significance if heads, not if tails.

          Under the null, that rejection would happen just 5% of the time. Now, it's not actually a coin flip but (something like) the ordering of the outcomes, which is just noise under the assumed model. It's not clear whether the court understood that part in deciding the mid-p was acceptable.
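
          (The arithmetic, as a few lines of R, using the illustrative probabilities above:)

          p_most   <- 0.01   # P(most extreme outcome) under the null
          p_second <- 0.08   # P(second most extreme outcome) under the null

          p_most + p_second        # 0.09: the ordinary exact p-value, not significant
          p_most + p_second / 2    # 0.05: the mid-p value, right on the line
          # Equivalent randomized test: always reject on the most extreme outcome,
          # reject on the second only if a fair coin lands heads.
          p_most + 0.5 * p_second  # 0.05: the Type I error rate under the null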

      • > Oliver Wendell Holmes et al. were stating the Law, theirs was a clockwork universe and C. S. Peirce wasn't consulted about variability;
        Is there a reference on this?

        An argument I once heard was too much inertia: roughly, (most) law students are unlikely to see statistics as very important, as current judges haven't a clue, and so statistical arguments will not be taken seriously. That used to be the case in medicine in the 1970s and started to change in the 1980s, when clinicians who could implement and understand randomized trials and cost-effectiveness analyses started to be more and more promoted above others. Is this likely to be true in law?

        • No reference beyond my own (hardly exhaustive) study of the matter. Other than the (Learned) Hand Rule, there's very little discussion of the implications of uncertainty in the law. I speculate that we, the people, desire that justice be done in such a manner that we believe it has indeed been done, and that the courts provide that, whether or no.

          As for when things will change, it’ll take fewer judges with degrees in political science and more with degrees in the beautiful game (math) to make a difference.

        • I'll try to check (book not here), but I remember Cheryl Misak, in The American Pragmatists (OUP 2013), pointing out in the chapter on Oliver Wendell Holmes that he thought one should never be certain but could only try to make good bets.

          So he may have had a good grasp on the uncertainties but could not pass that on.

        • > define “the law” as a prediction of what will bring punishment or other consequences from a court
          Definitely from https://en.wikipedia.org/wiki/The_Metaphysical_Club (the pragmatic grade of a concept).

          But I think that does not rule out an understanding of uncertainty but rather supports it.

          On the other hand, I have not re-read the chapter on how Holmes fit in with American pragmatism.

          > nobody thought much about priors back then
          That's a current interest of mine: nobody thought much about priors back then because there was nothing to think about. All they tried to represent was complete doubt about true parameter values.

          Stephen Stigler has argued that F. Galton was the first to think of priors (1880s) that represent knowledge rather than ignorance, but that it had very little influence on anyone.

          I have been looking for documentation that Peirce was or was not aware of Galton’s priors – but no success yet. On the other hand, I have yet to discern arguments against priors that purposefully represent knowledge in his wider philosophy.

    • This book is phenomenal, up there with BDA3 and Statistical Rethinking and The BUGS Book. But it's not an after-five kind of book; it needs hard work. Also, I believe Joe B provides videos; he's very funny to watch. Seems like a great teacher.

  6. If I were starting from scratch:

    Morgan and Winship, Counterfactuals and Causal Inference
    Long and Freese's Stata book
    maybe one of Rosenbaum's observational studies books

    (I know some/all of these are not going to be popular here)

  7. This kind of approach (take a bunch of popular-science books and read them) never worked for me. I read a lot on statistics *as a non-statistician*. If I read semi-technical books like Gelman and Hill, I felt mystified by whole sections where it felt like I had missed a crucial part of the conversation. If I read fluffy books about how statistics is misused (the most horrible book I have ever read in that genre is The Cult of Statistical Significance), I was suitably entertained but not equipped to actually do anything useful. It’s statistical tourism.

    I think reading books on modeling can work if you already have serious technical preparation, but I assume this person does not. I know a few people like this; they would just pick up BDA and say uh huh, uh huh, and then implement something in C or C++ for a real problem they are trying to solve. If you are not at that level, all you will ever be is a statistical tourist.

    If you are serious about it, go through formal study. There is nothing like deriving an answer to a problem and finding out that you are dead wrong.

    • I agree with your point on BDA. When I read it for the first time, I had done some grid estimation of posterior densities, i.e., really elementary stuff, and all the advanced material was like magic to me; I had a similar experience to yours. After the gentle introduction, I really wasn't able to appreciate most of the stuff in the book.
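
      (By grid estimation I mean really elementary stuff like the following sketch in R, with made-up data:)

      # Grid approximation of a posterior: binomial likelihood, flat prior,
      # 7 successes in 10 trials (made-up data).
      theta <- seq(0, 1, length.out = 1000)
      lik   <- dbinom(7, size = 10, prob = theta)
      post  <- lik / sum(lik)    # flat prior, so posterior proportional to likelihood
      theta[which.max(post)]     # posterior mode, about 0.7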

      The opus that finally cracked open many Bayesian/statistical concepts for me was Kruschke's similarly named book, Doing Bayesian Data Analysis. The discussion is concrete most of the time, which might be frustrating for those who are more mathematically inclined, but for me that was the aspect that helped me understand what was being said. After this I've gone back to BDA and have gotten much, much more out of it.

      Sameera recommended Kline's Beyond Significance Testing. The first part of the book is a fairly illuminating discussion of p-values, misinterpretations of them, et cetera, the usual good stuff, but to me the book as a whole reads more like a reference book: I've mostly gone back to it to check how a certain confidence interval or whatever should be calculated. There is not much, if I remember correctly, about the practical implementation of the equations and concepts introduced in the book. Of course it is not that hard to implement the equations (even the most difficult are fairly easy to implement in R) and solve them that way, but if the starting point is that one does not know how to do that sort of thing…

      Not to say that it’d be a bad recommendation; that is just my own experience with the book.

      Last, but not least, some YouTube stuff, since we live during the era of the YouTube:

      “Some Bayesian modeling techniques in Stan” is something I found to be illuminating about linear models, how to program them in Stan etc., and I think the major points should be comprehensible to pretty much anyone:

      https://www.youtube.com/watch?v=uSjsJg8fcwY

      Also (did someone else already mention this? Well, why not mention it again): Richard McElreath’s course “Statistical rethinking” is available on Youtube. Really easily followable stuff, for pretty much anyone.

      • Zlyglöbylos,

        I agree with your assessment of Rex Kline's book. The first part was most helpful, probably the clearest exposition of the misinterpretations. The rest required a very good foundation in basic statistics. Often the examples provided, and the graphics, are boring and sometimes inappropriately displayed.

        But in reviewing a couple of other books, the same or similar conclusion can be drawn. Some parts very good; some parts not so good.

        I think there needs to be a systematic examination of all these basic statistics texts. I believe I mentioned this to John Ioannidis recently. More specifically, I noted that a history of statistical reasoning in high school and university undergraduate programs would be beneficial; a full semester would be the minimum.

    • Shravan,

      Obviously different people get different takeaways from a book too. I liked The Cult of Statistical Significance very much. You call it 'statistical tourism', but the thesis has a long history; see for example The Significance Test Controversy by Ramon Henkel. Anyway, tourism can be very informative and helpful. After all, we are evaluating whether 'statistics' is equipped to add anything useful. It's a journey.

      • Sameera, I agree with your general point. I also read these kinds of books. I just meant that they won’t make one actually able to do anything useful, at least that was my experience. One needs to dive into the technical details and get one’s hands dirty for that.

        • Shravan,

          'Useful' has many, many connotations:

          http://www.thesaurus.com/browse/useful. Which one/s do you mean to convey?

          However, nearly everyone here on the blog has been pointing to practical & theoretical problems in specific contexts, based on their own vocational/practical experiences. They have been getting 'their hands dirty'.

          If the objective is to redo, reinvent, or circumvent those problematic results, then it leaves open the sense in which you mean 'useful'. Perhaps you can point to one or two examples of useful efforts.

          If you mean that it is necessary to grasp technical knowledge, that is a given. Nevertheless, there are aspects to any of the problems raised that are within the grasp of those without extensive technical knowledge. Obviously some fields are highly technical to begin with, and it requires putting in the time to learn them. But most problems also entail logical reasoning, which is important to detect as well. Everyone has different competencies [subject matter & logical] in different degrees. There is no assurance that subject matter expertise will necessarily yield critical thinking. That is why several of the books recommended may be useful.

          This discussion reminds me of a theme that has cropped up now and then: the generalist vs. the subject matter expert. And, even more fundamentally, how to improve judgment, qualitatively and quantitatively.

          Lastly, we were apprised of some of these problems when Sander debated Carlos here. I'll have to find the link.

        • Hi Sameera, with "useful" I meant it in a specific sense: can I take something from this book and use it to improve my day-to-day research output? What the formal study of statistics gave me would never have been possible through reading general-audience books. Sure, I could get a brief look behind the curtain and see all the riches there, but I couldn't go in and do something with them.

          The post you link to is about a paper that is sort of the end-result of that whole process, which lasted about 10 long years. I gained a lot from the comments on the blog, and in fact, in the final version of the paper (not public yet) many of the blog-commentators are acknowledged. But the useful comments were from professional statisticians; it’s no surprise I learnt something new from some of the 240+ comments! I wasn’t talking about that. This is the Gelman blog; some 4000+ readers IIRC.

          I was talking about fun popular-science books that talk about statistics. I was saying that if this person wants proper training, those books are good to read after five, but for the real stuff you have to go through the formal education process. Maybe Coursera-type stuff is enough for some; it wasn't for me back in 2011 when I started. By getting your hands dirty I mean fitting Stan/JAGS/WinBUGS models, acquiring hands-on experience, struggling with things beyond one's technical ability.

          To even understand Casella and Berger, for example, one needs some preparation, but one gets much more out of it. In one brief section they introduce the probability integral transform, and it is shocking when one realizes what it means for NHST. They never mention it (I think); you have to connect the dots yourself and that realization sinks in deep because it was your own insight. No amount of spelling out in pop-sci books that the p-value distribution is uniform under the null will sink in till you see the proof yourself and work out its implications yourself. My problem was that I just didn’t have any technical preparation for all this; my undergraduate major was Japanese! I suspect this person has the same starting point. It’s gonna be a long journey if he/she really wants to get somewhere.
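
          (For anyone who wants to see that uniformity without working through the proof, a quick simulation in R will show it; this one assumes normal data and a two-sample t-test:)

          # Under the null, p-values are uniformly distributed.
          set.seed(123)
          pvals <- replicate(10000, t.test(rnorm(20), rnorm(20))$p.value)
          hist(pvals, breaks = 20)   # roughly flat
          mean(pvals < 0.05)         # close to 0.05, by construction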

        • Shravan,

          Thank you for your thoughtful response. There is very little with which I disagree.

          I don't think any of the recommendations posted here on the blog really fall into the 'fun popular-science' category. I wasn't sure if that is what you meant.

          You did not appreciate the Cult of Statistical Significance. That’s your prerogative. Maybe it didn’t address your vocational interests.

          I found the thesis of the book applicable to the debt/debt-ceiling debate that was being conducted in the mid-2000s. The way I would put it is that I had a feel for the topic of 'debt/debt ceiling' because I was also following the work of a former Securities & Exchange Commissioner, Roderick Hills, who later headed the Hills Governance Program at the Center for Strategic & International Studies. How to measure the debt was a question that did surface often.

          My emphasis is a little different. I stated it earlier: there is no guarantee that subject matter expertise elicits a critically questioning attitude. You may be in the minority. The claim is that thousands have accepted the standard statistics curricula with little or no exposure to the ongoing controversies shadowing the field.

          The student in all probability has either taken a basic statistics course or will. His specific query is about modeling and causal inference, which would require a basic statistics course. I just think that the books I recommended were essential for me to contextualize the ongoing debates in statistics.

        • Shravan:

          If people were more open and talked about it, I think we would find you are in the majority.

          Wish I had kept a log. After doing an MBA, which included being part of the PhD marketing seminar in the second year, then four months as a research assistant programming an automated simplified hierarchical-modelling analysis of OECD industry data for multiple countries (e.g., US, Canada, and Japan), or at least panel plots of all industries' multi-year smooth trajectories within each country, and also having been a student member of the Toronto Semiotic Circle and having attended their summer research program (e.g., drinking beer with Umberto Eco, J. G. Gardin, René Thom, and others), I entered a Biostatistics MSc in 1984.

          Now, in 1986 I was able to develop methods for meta-analysis (L'Abbe KA, Detsky AS, O'Rourke K, Meta-analysis in clinical research), which I think is still reasonable today. But until 1988 I summarized and ran everything by David F. Andrews. Maybe by 1990 I was a stand-alone competent statistician for enabling clinical research, but I would still run stuff by David if he had the time.

          I became more aware of my continuing deficiencies in preparing the ASA continuing-education courses on meta-analysis that I gave at JSM in 1998 and 1999. I did not get caught there, but Brian Ripley did catch me later, in a talk (Meta-Analysis and the Quantitative Investigation of Replication, S-PLUS User's Conference, 1999) that involved my incomplete understanding of residual likelihood. (Hey, I think that was my best title ever?)

          My coming into real competency, in my own mind, was after two years at Oxford: back in Ottawa in 2004, being able and very comfortable implementing Yudi Pawitan's likelihood-based first-order asymptotic analysis of the one-way random-effects model for use with real data.

        • Keith,

          Is it possible that some physicians, epidemiologists, and statisticians went into the burgeoning 80s Evidence-Based Medicine (EBM) movement and its circles? Even I came across the work of David Eddy, Archibald Cochrane, David Sackett, and Alvin Feinstein.

          I am suggesting that your generation may have been in the majority in being more critical of mainstream research theories and practices. You point to David F. Andrews, whom I haven't come across. Nevertheless, the 60s through the late 90s stand out for the number of critical articles and books about clinical epidemiology/trials.

          https://en.wikipedia.org/wiki/Evidence-based_medicine

          John Ioannidis seems to suggest that the EBM movement has been hijacked, and of course he has since assailed the use of p-values in biomedical research.

          https://www.ncbi.nlm.nih.gov/pubmed/26934549

        • Thanks for sharing these details. It's very helpful to be reminded that it takes a long, long time to become functional at that level of mastery.

  8. It is hard to know what to recommend without knowing where someone is, sophistication-wise. The recommendations above likely err on the side of sophistication. If so, my recommendations are a corrective.

    I think that for exploring data, starting with R is a terrible idea, unless one already has lots of experience with point-and-click programs. I’d recommend JMP as a cheap and good and painless program:

    https://onthehub.com/download/software-discounts/jmp

    If you’re ready to learn R, then by all means do. But if you’re trying to understand statistics, then learning the syntax of R will make things harder (multiplicatively, and not additively, I think).

    (Get a data set you’re interested in. If you don’t have one, maybe start with: http://gss.norc.org)

    Judea Pearl’s recent book “The Book of Why” is a terrific guide to causation.

    • “It is hard to know what to recommend without knowing where someone is, sophistication-wise.”

      +1

      I learned statistics by teaching it, coming from a background in pure mathematics, so I'm probably not in a position to give good advice to people coming from a very different background. But I will second Michael's comments on R vs. JMP, since I had virtually no experience coding when I got into statistics.

      Also, apps, demos, etc. that illustrate the concepts can be very helpful in learning (rather than mis-learning) the ideas. I recall that some of the "Arc" demos that came with Cook and Weisberg's Applied Regression Including Computing and Graphics were very good for seeing what was really going on. (I don't know if they're still available.) More recently, some of the Shiny apps by Agresti et al. (http://www.artofstat.com/webapps.html) seem quite good for illustrating some concepts and cautions.

    • As a counter-example, I'll have to share my own experience; my point is to balance the long-held belief that R (and by association other high-level programming languages) is hard to learn: that it'll take immense amounts of motivation and work to begin to understand them, and so on.

      When I was doing my bachelor’s thesis, I had to learn–from scratch and without any tutors–how to use Matlab for generating experimental stimuli etc. and how to use R to statistically analyze the data. The thing that held me back from even beginning to get into Matlab and R was all the blogs and such which prefaced their tutorials by warning the reader about the steepness of the learning curve, how there are going to be a plethora of mistakes and everything’s going to be hellish.

      From the viewpoint of someone who started to learn those languages from scratch, that wasn't the experience. Of course, looking back to the scripts I wrote, I now realize they were horrendous. But they did what they had to do; I was able to accomplish my goals. I think it is much, much easier to get "something that works" working than people really make it out to be. This has also been my experience when tutoring others in how to use R. People with no formal training in statistics or programming have been fast to catch the basics, and have been able to conduct their own analyses independently.

      Also, I don’t think GUI-based programs are necessarily that much easier to use. I’ve never used JMP, but SPSS was part of the statistical training in my university, and I hope I never have to do anything with it ever again. I think manipulating data, matrices etc. is much easier in R.

      And of course, as an advocate for free/open-source software, I'm not amused by someone recommending commercial software over free software. But I admit that this is an annoying attitude; sorry for that.

  9. For causal inference, reading about the “potential outcomes framework,” also known as the “Rubin causal model,” or any combination of those words, is necessary. I’ve run into so many papers that just mention offhand that they use Rubin’s model or the potential outcomes framework—without really discussing it much. The Wikipedia page describes it well enough and provides references: https://en.wikipedia.org/wiki/Rubin_causal_model
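
    A toy simulation makes the framework concrete (all numbers invented): each unit has two potential outcomes, we observe only one of them, and randomization is what lets a simple difference in means recover the average treatment effect.

    # Toy potential-outcomes (Rubin causal model) simulation.
    set.seed(1)
    n  <- 1000
    y0 <- rnorm(n)                       # potential outcome without treatment
    y1 <- y0 + 2                         # potential outcome with treatment; true effect = 2
    z  <- rbinom(n, 1, 0.5)              # randomized treatment assignment
    y  <- ifelse(z == 1, y1, y0)         # we observe only one outcome per unit
    mean(y[z == 1]) - mean(y[z == 0])    # difference in means, close to 2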

  10. Also, I would not recommend Richard McElreath's book if the person is not already well acquainted with statistics. The book is absolutely great for people being introduced to Bayesian thinking; however, I don't think it's an introduction to statistics in general. If there's a desire to learn statistics and simultaneously learn R, it might be a good idea to read some of Andy Field's books (to learn stats and get a nice introduction to R) and Hadley Wickham's books (to become more acquainted with R).

  11. By the way, I was wrong. It was Miguel Hernan (and not Dawid) who first challenged Pearl to a gunfight at the Twitter corral. The debate is about Simpson’s Paradox. Is it rooted in causation or statistics? I’d like to hear (read) Andrew’s take.

    For me, once I finally figured out how to make graphics that held as many pixels as Dave Justice and Derek Jeter had at-bats, I could see, or so I imagined, what was going on. It seemed to me a statistical apples-and-oranges problem; but again, I'm just standing on my molehill trying to see a bit further, being from time to time vaguely annoyed by you lucky peeps who're standing on taller shoulders.
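
    (For anyone who wants to poke at it, here's the at-bat arithmetic in R. The hit and at-bat counts are as I recall them from the standard telling of the Justice/Jeter example, so verify before relying on them:)

    hits_jeter   <- c(12, 183); ab_jeter   <- c(48, 582)    # 1995, 1996
    hits_justice <- c(104, 45); ab_justice <- c(411, 140)

    hits_jeter / ab_jeter                  # .250, .314: Justice higher each year...
    hits_justice / ab_justice              # .253, .321
    sum(hits_jeter) / sum(ab_jeter)        # .310: ...but Jeter higher combined
    sum(hits_justice) / sum(ab_justice)    # .270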
