I do think it was pretty ridiculous for people to call Samuel Huntington a scientist.

]]>Thank you for referring us to Susan Haack’s website. Fortunately, I have watched some of the talks on YouTube. I will definitely read those two articles in particular. I lean toward the hypothesis in the 1st article. It’s one that Serge Lang stated and implied as well in the case of Samuel Huntington’s bid for National Academy of Sciences membership. Serge got too vituperative as the debate ensued, undermining the strength of his positions.

The broader question I have is: what is meant by ‘statistical literacy’ in the biomedical and medical domains? In reading Gerd Gigerenzer’s Risk Savvy, I have inferred that consumers/patients and prospective trial participants must become even more statistically savvy before making decisions. Yet there is a practice of engendering ever more technical arguments that exclude those most in need of guidance in statistics.

The distinctions raised in his book are not in the purview of much medical education. I would be interested in how John Ioannidis, Steven Goodman, Andrew, Sander Greenland, and others respond to the lack of statistical literacy in the medical profession specifically.

]]>Now on my to-read list:

Proof, Probability, and Statistics: The Problem of Delusive Exactness (2018)

Haack opens her paper on “future directions” in scientific testimony with a quotation from Aristotle and an observation from a hundred-year old U.S. Supreme Court ruling! Her reason? Her argument will be that the hankering for exactness where no exactness is possible that Holmes identified as “a source of fallacy throughout the law” still impedes us; and, specifically, that it gets in the way of a clear understanding of the proper role of probabilistic and statistical evidence—such as testimony of random-match DNA probabilities in criminal cases, or epidemiological testimony of increased risk in toxic-tort cases—in legal proof of questions of fact. As a result, the law sometimes asks more of statistics than statistics can give, and sometimes gets less from statistics than statistics could give.

“The Pragmatist” [Oliver Wendell Holmes Jr.] (2019)

THE PRAGMATISM AND PREJUDICE OF OLIVER WENDELL HOLMES JR., 2019

Justice Holmes has been described as a great liberal, a legal realist avant la lettre, a precursor of the “law and economics” movement, a moral skeptic or nihilist, and even as a proto-Nazi and, by one exasperated critic, a “cynical… brutalitarian”; but all these descriptions are, in one way or another, unsatisfactory. Historian of philosophy Max Fisch was much more illuminating when he wrote in 1942 that it had “not yet been shown how close [Holmes] stood to pragmatism.” Showing just how thoroughly pragmatist Holmes’s thought was, however, requires more evidence and more argument than Fisch could provide in his short paper. My goal is to supply that further evidence and further argument. I begin by telling the story of the origin and evolution of the pragmatist tradition, and Holmes’s part in that story—which, as we’ll see, was very far from straightforward (§1). Next, I argue that what united the philosophers of this tradition was, not a body of doctrine, but a congeries of themes, attitudes, and predilections that, despite their great diversity and their many disagreements, they all shared (§2). Then I show that the shared themes and attitudes found in Peirce, James, and Dewey can also be seen at work, in a distinctively legal form, in Holmes (§3); and finally that, once we recognize the pragmatist character of Holmes’s multi-faceted legal thinking, it’s easier to see what’s true but misleading, what’s half-true, and what’s just plain false in that litany of discrepant, one-dimensional classifications with which I began (§4).

When I was doing my bachelor’s thesis, I had to learn–from scratch and without any tutors–how to use Matlab for generating experimental stimuli etc. and how to use R to statistically analyze the data. The thing that held me back from even beginning to get into Matlab and R was all the blogs and such which prefaced their tutorials by warning the reader about the steepness of the learning curve, how there are going to be a plethora of mistakes and everything’s going to be hellish.

From the viewpoint of someone who started to learn those languages from scratch, that wasn’t the experience. Of course, looking back to the scripts I wrote, I now realize they were horrendous. But they did what they had to do; I was able to accomplish my goals. I think it is much, much, much easier to get “something that works” working than people really make it out to be. This has also been my experience when tutoring others in how to use R. People with no formal training in statistics or programming have been fast to catch the basics, and have been able to conduct their own analyses independently.

Also, I don’t think GUI-based programs are necessarily that much easier to use. I’ve never used JMP, but SPSS was part of the statistical training in my university, and I hope I never have to do anything with it ever again. I think manipulating data, matrices etc. is much easier in R.

And of course, as an advocate of free/open-source/etc.-type software, I’m not amused by someone recommending commercial software over free; but I do admit that this is an annoying attitude, sorry for that.

]]>Darn, my last post got lost. Anyway, wouldn’t you say that your generation was influenced by the Evidence-Based Medicine movement that burgeoned in the ’80s, with the work of David Sackett, Archie Cochrane, David Eddy, and others?

https://en.wikipedia.org/wiki/Evidence-based_medicine

See also John Ioannidis’ take on the trajectory of the movement.

https://www.ncbi.nlm.nih.gov/pubmed/26934549

And of course John has since assailed the use of p-values in biomedical research.

]]>Is it possible that some physicians, epidemiologists, and statisticians went into the burgeoning ’80s Evidence-Based Medicine (EBM) movement and its circles? Even I came across the work of David Eddy, Archibald Cochrane, David Sackett, and Alvin Feinstein.

I am suggesting that your generation may have been in the majority in being more critical of mainstream research theories and practices. You point to David F. Andrews, whom I haven’t come across. Nevertheless, the years from the ’60s through the late ’90s stand out for the number of critical articles and books about clinical epidemiology/trials.

https://en.wikipedia.org/wiki/Evidence-based_medicine

John Ioannidis seems to suggest that the EBM movement has been hijacked.

]]>If people were more open and talked about it, I think we would find you are in the majority.

Wish I had kept a log. After doing an MBA, which included being part of the PhD marketing seminar in the second year; then four months as a research assistant programming an automated, simplified hierarchical-modelling analysis of OECD industry data for multiple countries (e.g. US, Canada, and Japan) – well, at least panel plots of all industries’ multi-year smoothed trajectories within each country; and also having been a student member of the Toronto Semiotic Circle and having attended their summer research program (e.g. drinking beer with Umberto Eco, JG Gardin, Rene Thom, and others) – I entered a Biostatistics MSc in 1984.

Now in 1986 I was able to develop methods for meta-analysis – L’Abbe KA, Detsky AS, O’Rourke K. Meta-analysis in clinical research. – which I think are still reasonable today. But until 1988, I summarized and ran everything by David F. Andrews. Maybe by 1990 I was a stand-alone, competent statistician for enabling clinical research, but I would still run stuff by David if he had the time.

I became more aware of my continuing deficiencies while preparing the ASA continuing-education courses on meta-analysis that I gave at JSM in 1998 and 1999. I did not get caught there, but Brian Ripley did catch me later – “Meta-Analysis and the Quantitative Investigation of Replication,” S-PLUS User’s Conference, 1999 – which involved my incomplete understanding of residual likelihood. (Hey, I think that was my best title ever?)

In my own mind, I came into real competency after two years at Oxford: back in Ottawa in 2004, I was able and very comfortable implementing Yudi Pawitan’s likelihood-based first-order asymptotic analysis of the one-way random-effects model for use with real data.

]]>Definitely from https://en.wikipedia.org/wiki/The_Metaphysical_Club (the pragmatic grade of a concept).

But I think that does not rule out an understanding of uncertainty but rather supports it.

On the other hand, I have not re-read the chapter on how Holmes fit in with American pragmatism.

> nobody thought much about priors back then

That’s a current interest of mine – nobody thought much about priors back then because there was nothing to think about. All they tried to represent was complete doubt about the true parameter values.

Stephen Stigler has argued that F. Galton was the first (in the 1880s) to think of priors that represent knowledge rather than ignorance – but that this had very little influence on anyone.

I have been looking for documentation that Peirce was or was not aware of Galton’s priors – but no success yet. On the other hand, I have yet to discern, in Peirce’s wider philosophy, arguments against priors that purposefully represent knowledge.

]]>Thank you for your thoughtful response. There is very little with which I disagree.

I don’t think any of the recommendations posted here on the blog really fall into the ‘fun popular-science’ category. I wasn’t sure if that is what you meant.

You did not appreciate the Cult of Statistical Significance. That’s your prerogative. Maybe it didn’t address your vocational interests.

I found the thesis of the book applicable to the debt/debt-ceiling debate that was being conducted in the mid-2000s. The way I would put it is that I had a feel for the topic of the ‘debt/debt ceiling’ because I was also following the work of a former Securities & Exchange commissioner, Roderick Hills, who later headed the Hills Governance Program at the Center for Strategic & International Studies. How to measure the debt was a question that surfaced often.

My emphasis is a little different. I stated it earlier: there is no guarantee that subject-matter expertise elicits a critically questioning attitude. You may be in the minority. The claim is that thousands have accepted the standard statistics curricula with little or no exposure to the ongoing controversies shadowing the field.

The student in all probability has either taken a basic statistics course or will. His specific query is about modeling and causal inference, which would require a basic statistics course. I just think that the books I recommended were essential for me in contextualizing the ongoing questions in statistics.

]]>“The district court again took issue with Dr. [Nicholas] Jewell’s methodology. First, the court expressed concern with Dr. Jewell’s decision to replace the definition of diabetes used by the ASCOT endpoints committee with one of his own. Although Dr. Jewell is well-qualified as a statistician, he’s not a medical doctor or professional, nor does he have any particular expertise in diabetes. The court decided that Dr. Jewell lacked the expertise to “second guess” the judgments of the endpoints committee, and that it was inappropriate for “someone with no clinical expertise [to choose] to replace the adjudication committee’s determination of new-onset diabetes with particular unadjudicated raw data, namely lab values of his choice.”

The court was not at all impressed with that statistical expert witness on a number of points – for what seems like very good reasons.

By the way, the mid-p value is a quintessential example of a bright-line fetish. For example, suppose that, because of discrete outcomes, under the null hypothesis the most extreme outcome happens with probability .01 and the second most extreme with probability .08 – so the only possible p-values are .01 and .01 + .08 = .09. The mid-p value is then equivalent to flipping an unbiased coin when the second outcome occurs, and declaring significance if heads but not if tails.

Under the null, that would happen with probability just .05. Now, it’s not actually a coin flip but (something like) the ordering of the outcomes, which is just noise under the assumed model. It’s not clear the court understood that part in deciding the mid-p was acceptable.
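The coin-flip equivalence above is easy to check by simulation. A minimal sketch (in Python, purely for illustration; the .01/.08 probabilities are the ones from the example above): reject outright on the most extreme outcome, flip a fair coin on the second-most extreme, and watch the overall rejection rate come out at .01 + .08/2 = .05.

```python
import random

random.seed(1)

# Null probabilities from the example: most extreme outcome .01,
# second most extreme .08, everything else .91.
p_most, p_second = 0.01, 0.08

n = 1_000_000
rejections = 0
for _ in range(n):
    u = random.random()
    if u < p_most:
        rejections += 1          # ordinary p = .01 <= .05: always reject
    elif u < p_most + p_second:
        # mid-p here is .01 + .08/2 = .05; treating it as significant
        # is equivalent to rejecting on a fair coin flip
        rejections += random.random() < 0.5
    # otherwise the ordinary p-value is .09: never reject

rate = rejections / n
print(rate)  # close to .05
```

So the randomized procedure does hold size .05 on average, but which individual cases get "significant" is decided by noise.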

]]>The post you link to is about a paper that is sort of the end-result of that whole process, which lasted about 10 long years. I gained a lot from the comments on the blog, and in fact, in the final version of the paper (not public yet) many of the blog-commentators are acknowledged. But the useful comments were from professional statisticians; it’s no surprise I learnt something new from some of the 240+ comments! I wasn’t talking about that. This is the Gelman blog; some 4000+ readers IIRC.

I was talking about fun popular-science books that talk about statistics. I was saying that if this person wants proper training, those books are good to read after five, but for the real stuff you have to go through the formal education process. Maybe Coursera-type stuff is enough for some; it wasn’t for me back in 2011 when I started. By “getting your hands dirty” I mean fitting Stan/JAGS/WinBUGS models, acquiring hands-on experience, struggling with things beyond one’s technical ability.

To even understand Casella and Berger, for example, one needs some preparation, but one gets much more out of it. In one brief section they introduce the probability integral transform, and it is shocking when one realizes what it means for NHST. They never mention it (I think); you have to connect the dots yourself and that realization sinks in deep because it was your own insight. No amount of spelling out in pop-sci books that the p-value distribution is uniform under the null will sink in till you see the proof yourself and work out its implications yourself. My problem was that I just didn’t have any technical preparation for all this; my undergraduate major was Japanese! I suspect this person has the same starting point. It’s gonna be a long journey if he/she really wants to get somewhere.
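The fact alluded to above – that under the null the p-value is Uniform(0, 1), a direct consequence of the probability integral transform – can be seen in a few lines of simulation. A sketch (in Python rather than R, purely for illustration): simulate two-sided z-tests under the null and check the fractions of p-values below .05 and .50.

```python
import math
import random

random.seed(0)

def two_sided_p(z):
    # standard normal CDF via the error function
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))
    return 2 * (1 - cdf)

# Under the null each z-statistic is standard normal, so by the
# probability integral transform the p-values are Uniform(0, 1).
pvals = [two_sided_p(random.gauss(0, 1)) for _ in range(100_000)]

frac_below_05 = sum(p < 0.05 for p in pvals) / len(pvals)
frac_below_50 = sum(p < 0.50 for p in pvals) / len(pvals)
print(frac_below_05, frac_below_50)  # roughly 0.05 and 0.50
```

Of course, as the comment says, seeing the output of a simulation is no substitute for working through the proof yourself.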

]]>“we’re valuable”. I must have an aversion to plural verbs. Apologies

]]>So he may have had a good grasp on the uncertainties but could not pass that on.

]]>Here is the thread to which I referred. It’s a good practice to review previous discussions. I thought your exchanges with the other bloggers was valuable.

]]>For me, once I finally figured out how to make graphics that held as many pixels as Dave Justice and Derek Jeter had at-bats I could see, or so I imagined, what was going on. It seemed to me a statistical apples and oranges problem; but again, I’m just standing on my mole-hill trying to see a bit further; being from time to time vaguely annoyed by you lucky peeps who’re standing on taller shoulders.

]]>As for when things will change, it’ll take fewer judges with degrees in political science and more with degrees in the beautiful game (math) to make a difference.

]]>Here is one resource

Statistical Reasoning in Law and Public Policy, Joseph Gastwirth

]]>Thanks again. Let me say this: if I were to have read the paragraph you quoted without some exposure to the controversies in statistics, I would be clueless as to how it would be helpful. Then too, plaintiff and defense lawyers are in the business of convincing the judge or jury of their side, sometimes using very strategic and skilled rhetoric, as you can attest.

As I mentioned earlier, a book-length exposé of statistical reasoning in the courtroom would be fascinating. It may require a collaborative effort. But someone of the intellectual caliber of a Richard Posner may be able to undertake such an endeavor. Judge Posner is on to other things, but I’m sure he is still interested in this topic. Laurence Tribe might be another. Either one could perhaps recommend one of their former students.

Perhaps you can undertake it as you have the legal and statistical background.

I egged Sander Greenland on many months ago. I don’t recall how he answered, as it was only a brief mention.

]]>‘Useful’ has many, many connotations:

http://www.thesaurus.com/browse/useful. Which one(s) do you mean to convey?

However, nearly everyone here on the blog has been pointing to practical & theoretical problems in specific contexts: based on their own vocational/practical experiences. They have been getting ‘their hands dirty’.

If the objective is to redo, reinvent, or circumvent those problematic results, then it leaves open the sense in which you mean ‘useful’. Perhaps you can point to one or two examples of useful efforts.

If you mean that it is necessary to grasp technical knowledge, that is a given. Nevertheless, there are aspects of any of the problems raised that are within the grasp of those who may not have extensive technical knowledge. Obviously some fields are highly technical to begin with, and learning them requires putting in the time. It’s just that most problems entail logical reasoning, which is important to detect as well. Everyone has different competencies [subject matter & logical] in different degrees. There is no assurance that subject-matter expertise will necessarily yield a critically thinking effort. That is why several of the books recommended may be useful.

This discussion reminds me of a theme that has cropped up now and then: the generalist vs. the subject-matter expert. And, even more fundamentally, it goes back to how to improve judgment, qualitatively and quantitatively.

Lastly we have been apprised of some of these problems when Sander debated Carlos here. I’ll have to find the link.

]]>+1

I learned statistics by teaching it, coming from a background in pure mathematics, so I’m probably not in a position to give good advice to people coming from a very different background. But I will second Michael’s comments on R vs. JMP, since I had virtually no experience coding when I got into statistics.

Also, apps, demos, etc. that illustrate the concepts can be very helpful in learning (rather than mis-learning) the ideas. I recall that some of the “Arc” demos that came with Cook and Weisberg’s Applied Regression Including Computing and Graphics were very good for seeing what was really going on. (I don’t know if they’re still available.) More recently, some of the shiny apps by Agresti et al. (http://www.artofstat.com/webapps.html) seem quite good for illustrating some concepts and cautions.

]]>Is there a reference on this?

An argument I once heard was that there is too much inertia – roughly, (most) law students are unlikely to see statistics as very important because current judges haven’t a clue, and so statistical arguments will not be taken seriously. That used to be the case in medicine in the 1970s and started to change in the 1980s – clinicians who could implement and understand randomized trials and cost-effectiveness analyses started to be promoted more and more above others. Is this likely to be true in law?

]]>“We pause here to provide a brief overview of the concept of statistical significance and its proper role in the courtroom. Statistical significance is a measure of confidence that a trend observed in a dataset is not random. “A study that is statistically significant has results that are unlikely to be the result of random error . . . .” RMSE at 573. Statistical significance is typically expressed through a p-value. “A p-value represents the probability that an observed positive association could result from random error even if no association were in fact present.” Id. at 576 (emphasis removed). To determine whether an association is statistically significant, statisticians compare the p-value to a predetermined threshold value (also known as a significance level). If the p-value is smaller than the significance level, then the finding is statistically significant. Otherwise, it is not. “The most common significance level . . . used in science is .05. A .05 value means that the probability is 5% of observing an association at least as large as that found in the study when in truth there is no association.” Id. at 577 (footnote omitted).”

The opinion also features a discussion of Fisher’s Exact Test, of back-of-the-envelope meta-analyses done by non-statisticians, and of why the so-called A. Bradford Hill causal criteria are applied to evidence rather than used to create evidence. The court reached the correct conclusion (if you agree that courtrooms aren’t the best place for discovering the cause of diabetes and other diseases), but all the p-talk, discussions of randomness, and debate over whether 0.05 ought or ought not be the boundary line drawn between True and False is depressing. On the other hand, the fact that experts can be found who will, on whatever stats package they use, deploy one test after another until they get p less than the magic number, AND who then forget to shred the contents of their file drawer so that the p-greater-than-magic-number folders are later discovered, did make me smile. Here’s a link to the opinion: https://scholar.google.com/scholar_case?case=10965502063415301532
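For readers who haven’t met the Fisher’s Exact Test the opinion discusses, here is a minimal from-scratch sketch of the two-sided version (in Python; the 2×2 table is made up purely for illustration): the null probability of each table with the observed margins is hypergeometric, and the two-sided p sums every table no more probable than the observed one.

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the null (hypergeometric) probabilities of every table with the
    same margins that is no more probable than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def table_prob(x):
        # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = table_prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(table_prob(x) for x in range(lo, hi + 1)
               if table_prob(x) <= p_obs + 1e-12)

# made-up 2x2 table: rows = treated/control, columns = event/no event
p = fisher_exact_p(7, 3, 2, 8)
print(round(p, 4))  # 0.0698
```

Note that the answer lands just above the magic .05 line – exactly the kind of table that invites the expert-witness test-shopping described above.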

]]>Wow, thanks – it’s also over 1,000 pages. Very interesting. I will try to amble leisurely through them over the next few months. I may have read Laurence Tribe’s law-review article; the thesis sounded familiar.

It would be fantastic for some enterprising lawyer or legal scholar to undertake a review of cases using statistical reasoning.

In reading the introduction, I wonder whether it is ascertainable whether judges/lawyers have become better able to grasp the logic or illogic of statistical reasoning.

]]>Obviously different people get different takeaways from a book. I liked The Cult of Statistical Significance very much. You call it ‘statistical tourism’, but the thesis has a long history; see, for example, The Significance Test Controversy by Ramon Henkel. Anyway, tourism can be very informative and helpful. After all, we are evaluating whether ‘statistics’ is equipped to add anything useful. It’s a journey.

]]>I agree with your assessment of Rex Kline’s book. The 1st part was most helpful – probably the clearest exposition of misinterpretations. The rest required a very good foundation in basic statistics. Often the examples provided, and the graphics, are boring and sometimes inappropriately displayed.

But in reviewing a couple of other books, the same or a similar conclusion can be drawn: some parts very good, some parts not so good.

I think there needs to be a systematic examination of all these basic statistics texts. I believe I mentioned this to John Ioannidis recently. More specifically, I noted that a history of statistical reasoning in high school and university undergraduate programs would be beneficial. That is to say, a full semester would be the minimum.

]]>I think that for exploring data, starting with R is a terrible idea, unless one already has lots of experience with point-and-click programs. I’d recommend JMP as a cheap and good and painless program:

https://onthehub.com/download/software-discounts/jmp

If you’re ready to learn R, then by all means do. But if you’re trying to understand statistics, then learning the syntax of R will make things harder (multiplicatively, not additively, I think).

(Get a data set you’re interested in. If you don’t have one, maybe start with: http://gss.norc.org)

Judea Pearl’s recent book “The Book of Why” is a terrific guide to causation.

]]>The opus that finally cracked open many Bayesian/statistical concepts for me was Kruschke’s almost identically named book, Doing Bayesian Data Analysis. The discussion is most of the time really concrete, which might be frustrating for those who are more mathematically inclined, but for me this was the aspect that helped me understand what was being said. After this I’ve gone back to BDA and have gotten much, much more out of it.

Sameera recommended Kline’s Beyond Significance Testing. The first part of the book is a fairly illuminating discussion of p-values, misinterpretations of them, et cetera – the usual good stuff – but to me the book as a whole reads more like a reference book: I’ve mostly gone back to it to check how a certain confidence interval or whatever should be calculated. There is not much – if I remember correctly – about the practical implementation of the equations and concepts introduced in the book. Of course it is not that hard to implement the equations – even the most difficult are fairly easy to implement in R – and solve them that way, but if the starting point is that one does not know how to do that sort of thing…

Not to say that it’d be a bad recommendation; that is just my own experience with the book.

—

Last, but not least, some Youtube stuff, since we live during the era of the Youtube:

“Some Bayesian modeling techniques in Stan” is something I found to be illuminating about linear models, how to program them in Stan etc., and I think the major points should be comprehensible to pretty much anyone:

https://www.youtube.com/watch?v=uSjsJg8fcwY

Also (did someone else already mention this? Well, why not mention it again): Richard McElreath’s course “Statistical rethinking” is available on Youtube. Really easily followable stuff, for pretty much anyone.

]]>a. I must say I never understood Andrew’s articles until I had gone through formal study. Andrew’s general-audience writing often feels like he assumes you already know some things—which a general audience doesn’t. Now that I know what it’s about (and I’ve been following this blog for 11 years, that helps!), I find Andrew’s general-audience articles very comprehensible. I wouldn’t start with his articles, but rather would end my period of formal study with them.

b. I would completely avoid The Cult of Statistical Significance. It’s an extended twitter-rant. It’s all fine, but all you need is 280 characters to express the point, not a whole book. I read it but it quickly became painful even though I was agreeing with the authors. This book should never have been written.

The other stuff I don’t know much about or don’t have any comments on.

c.

]]>I think reading books on modeling can work if you already have serious technical preparation, but I assume this person does not. I know a few people like this; they would just pick up BDA and say uh huh, uh huh, and then implement something in C or C++ for a real problem they are trying to solve. If you are not at that level, all you will ever be is a statistical tourist.

If you are serious about it, go through formal study. There is nothing like deriving an answer to a problem and finding out that you are dead wrong.

]]>morgan and winship, counterfactuals and causal inference

long and freese’s stata book

maybe one of rosenbaum’s observational studies books

(i know some/all of these are not going to be popular here)

]]>The short sad story of statistics and the law is that back in the early ’70s, rising legal superstar Laurence Tribe wrote a law review article warning those interested in justice to eschew the dark methods of the Bayesians encamped beyond the gates. The Bayesian legal scholars of the era were (to greatly oversimplify) arguing for something dumb like this:

If plaintiff gets run over by a bus but can’t identify the bus we should hand his bill for lost wages and medical bills to whichever bus company has the most buses in town.

Tribe’s reply (again, to greatly oversimplify) was to state what is now the well-known argument against breast-cancer and prostate-cancer screening given the apparently (but not actually) robust sensitivity/specificity of current screening methods: the result would be lots of innocent people made to pay for things they hadn’t done. An instance of the prosecutor’s fallacy, tainted by obvious racial animus, had been widely publicized and overturned by the California Supreme Court a year or two earlier (if memory serves); it primed the courts to be leery of statistics, and Tribe confirmed their suspicions with a well-written rebuke.
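The screening argument above is just a base-rate calculation. A minimal sketch (all numbers hypothetical, chosen only to illustrate the effect): even with 95% sensitivity and 95% specificity, a 1% base rate means most “positives” are false positives.

```python
# All numbers hypothetical, chosen only to illustrate the base-rate effect.
sensitivity = 0.95   # P(test positive | condition present)
specificity = 0.95   # P(test negative | condition absent)
prevalence = 0.01    # base rate of the condition

# Bayes' rule: P(condition | positive)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(round(ppv, 3))  # about 0.161: most positives are false positives
```

Swap “condition” for “guilt” and “test” for a probabilistic match, and this is Tribe’s worry: an apparently damning match rate convicts mostly innocent people when the prior is low.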

Something else that’s important to understand is that when Oliver Wendell Holmes et al. were stating the Law, theirs was a clockwork universe, and C.S. Peirce wasn’t consulted about variability, whether of an object or of its measurement. The result was that the word “risk” rarely appeared for 100 years, as the courts were instead arguing about which billiard ball had knocked the 8-ball into the corner pocket and whether the cue ball was also to blame. They couldn’t figure out how something that made predictions about the future could be used to render judgments about the past.

When Stats couldn’t be avoided, the courts adopted a frequentist stance and proceeded to pound the square risk peg into the round causation hole by latching on to the idea that “p less than .05 means we can be 95% sure that whatever was ‘tested’ is true, and that’s more than adequate for our purposes.” Further evidence of their thinking can be found in the peculiar distinction between what they call “general causation” and “specific causation.” The idea here is that a change in a group mean somehow maps to causation “generally,” and that the individuals specifically affected can somehow be precisely identified: usually either by abduction (whereupon a great argument erupts over what things go in the set of possible causes and how they ought to be weighed) or by scrutinizing the relative risk and dichotomizing at 2.0, making for a double dichotomy, since the relative risk putatively discovered must in most jurisdictions also be “statistically significant.”
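For readers wondering where the 2.0 line comes from: it traces to the “probability of causation” heuristic, under which (given strong and contested assumptions) the fraction of exposed cases attributable to the exposure is (RR − 1)/RR, which exceeds one half – the civil “more likely than not” standard – exactly when RR > 2. A quick sketch:

```python
def probability_of_causation(rr):
    """Attributable-fraction heuristic (RR - 1) / RR.
    Crosses 0.5 exactly at RR = 2 -- the origin of the
    courts' bright line at a relative risk of 2.0."""
    return (rr - 1) / rr

results = {rr: round(probability_of_causation(rr), 3) for rr in (1.5, 2.0, 3.0)}
print(results)  # {1.5: 0.333, 2.0: 0.5, 3.0: 0.667}
```

Which, of course, only compounds the dichotomies: a noisy point estimate of RR gets rounded to a yes/no answer about an individual plaintiff.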

It’s because of the foregoing that I follow as closely as I can the Pearl/Dawid debate. Given that most courts just want to decide a case and move on to the next one my bet is that either DAGs won’t make an appearance for 100 years or the standard docket control order will require both sides’ experts to produce a DAG thereby reducing things to a ritual, much like the sacrifice of reason by the p-value knife.

]]>1. Dicing With Death by Stephen Senn

2. Thinking, Fast and Slow by Danny Kahneman
