Search results for “difference between significant and non-significant is not”

A potential big problem with placebo tests in econometrics: they’re subject to the “difference between significant and non-significant is not itself statistically significant” issue

In econometrics, or applied economics, a “placebo test” is not a comparison of a drug to a sugar pill. Rather, it’s a sort of conceptual placebo, in which you repeat your analysis using a different dataset, or a different part of your dataset, where no intervention occurred. For example, if you’re performing some analysis studying […]

The difference between significant and non-significant is not itself statistically significant

One of my favorite points arose in the seminar today. A regression was fit to data from the 2000 Census and the 1990 Census, a question involving literacy of people born in the 1958-1963 period. The result was statistically significant for 2000 and zero for 1990, which didn’t seem to make sense, since presumably these […]
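The statistical point in this post can be sketched numerically. The coefficients and standard errors below are made up for illustration (they are not from the Census analysis described above); the point is that one estimate can clear the significance threshold while another does not, even though the difference between the two is itself far from significant:

```python
import math

# Hypothetical regression coefficients with equal standard errors
# (illustrative numbers only, not from the post's Census example).
est_2000, se_2000 = 25.0, 10.0   # "significant": z = 2.5
est_1990, se_1990 = 10.0, 10.0   # "not significant": z = 1.0

z_2000 = est_2000 / se_2000
z_1990 = est_1990 / se_1990

# Assuming the two estimates are independent, the difference has
# standard error sqrt(se1^2 + se2^2).
diff = est_2000 - est_1990
se_diff = math.sqrt(se_2000**2 + se_1990**2)
z_diff = diff / se_diff

print(z_2000)  # 2.5  -> significant at the usual 5% level
print(z_1990)  # 1.0  -> not significant
print(round(z_diff, 2))  # 1.06 -> the difference is not significant
```

So declaring the 2000 effect "real" and the 1990 effect "zero" amounts to treating a z of 2.5 and a z of 1.0 as categorically different, when the comparison between them corresponds to a z of only about 1.06.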

On deck through the rest of the year

July: The Ponzi threshold and the Armstrong principle Flaws in stupid horrible algorithm revealed because it made numerical predictions PNAS forgets basic principles of game theory, thus dooming thousands of Bothans to the fate of Alderaan Tutorial: The practical application of complicated statistical methods to fill up the scientific literature with confusing and irrelevant analyses […]

“However noble the goal, research findings should be reported accurately. Distortion of results often occurs not in the data presented but . . . in the abstract, discussion, secondary literature and press releases. Such distortion can lead to unsupported beliefs about what works for obesity treatment and prevention. Such unsupported beliefs may in turn adversely affect future research efforts and the decisions of lawmakers, clinicians and public health leaders.”

David Allison points us to this article by Bryan McComb, Alexis Frazier-Wood, John Dawson, and himself, “Drawing conclusions from within-group comparisons and selected subsets of data leads to unsubstantiated conclusions.” It’s a letter to the editor for the Australian and New Zealand Journal of Public Health, and it begins: [In the paper, “School-based systems change […]

On deck through the first half of 2018

Here’s what we got scheduled for ya: I’m with Errol: On flypaper, photography, science, and storytelling Politically extreme yet vital to the nation How does probabilistic computation differ in physics and statistics? “Each computer run would last 1,000-2,000 hours, and, because we didn’t really trust a program that ran so long, we ran it twice, […]

Chasing the noise

Fabio Rojas writes: After reading the Fowler/Christakis paper on networks and obesity, a student asked why it was that friends had a stronger influence on spouses. In other words, if we believe the F&C paper, they report that your friends (57%) are more likely to transmit obesity than your spouse (37%) (see page 370). This […]

Top 10 blog obsessions

I was just thinking about this because we seem to be circling around the same few topics over and over (while occasionally slipping in some new statistical ideas):

Controversy over the Christakis-Fowler findings on the contagion of obesity

Nicholas Christakis and James Fowler are famous for finding that obesity is contagious. Their claims, which have been received with both respect and skepticism (perhaps we need a new word for this: “respecticism”?), are based on analysis of data from the Framingham heart study, a large longitudinal public-health study that happened to have some social network data (for the odd reason that each participant was asked to provide the name of a friend who could help the researchers locate them if they were to move away during the study period).

The short story is that if your close contact became obese, you were likely to become obese also. The long story is a debate about the reliability of this finding (that is, can it be explained by measurement error and sampling variability?) and its causal implications.

This sort of study is in my wheelhouse, as it were, but I have never looked at the Christakis-Fowler work in detail. Thus, my previous and current comments are more along the lines of reporting, along with general statistical thoughts.

We last encountered Christakis and Fowler in April, when Dave Johns reported on some criticisms coming from economists Jason Fletcher and Ethan Cohen-Cole and mathematician Russell Lyons.

Lyons’s paper was recently published under the title, “The Spread of Evidence-Poor Medicine via Flawed Social-Network Analysis.” Lyons has a pretty aggressive tone–he starts the abstract with the phrase “chronic widespread misuse of statistics” and it gets worse from there–and he’s a bit rougher on Christakis and Fowler than I would be, but this shouldn’t stop us from evaluating his statistical arguments. Here are my thoughts: