Search results for “measurement”

Update on social science debate about measurement of discrimination

Dean Knox writes: Following up on our earlier conversation, we write to share a new, detailed examination of the article, Deconstructing Claims of Post-Treatment Bias in Observational Studies of Discrimination, by Johann Gaebler, William Cai, Guillaume Basse, Ravi Shroff, Sharad Goel, and Jennifer Hill (GCBSGH). Here’s our new paper, Using Data Contaminated by Post-Treatment Selection?, […]

The importance of measurement in psychology

Günter Trendler (whose work on measurement in psychology we discussed last year) writes: I’m aware that theoretical work on measurement is perceived by most psychologists as merely “philosophical” and therefore ignored. Recently I stumbled across a publication [Measuring Unobserved Variables in Educational Inequality Research: Mathematics and Language ‘Abilities’ in Early Childhood, by Alejandra Rodriguez] which […]

Controversies in the theory of measurement in mathematical psychology

We begin with this email from Günter Trendler: On your blog you wrote: The replication crisis in social psychology (and science more generally) will not be solved by better statistics or by preregistered replications. It can only be solved by better measurement. Check this out: Measurement Theory, Psychology and the Revolution That Cannot Happen (pdf […]

The importance of talking about the importance of measurement: It depends on the subfield

An interesting point came up in a comment thread the other day, and you might have missed it, so I’ll repeat it here. Dan Goldstein wrote to me: Many times I’ve heard you say people should improve the quality of their measurements. Have you considered that people may be quite close to the best quality […]

Can we do better than using averaged measurements?

Angus Reynolds writes: Recently a PhD student at my university came to me for some feedback on a paper he is writing about the state of research methods in the fear extinction field. Basically, you give someone an electric shock repeatedly while they stare at neutral stimuli, and then you see what happens when you […]

Toward better measurement in K-12 education research

Billy Buchanan, Director of Data, Research, and Accountability, Fayette County Public Schools, Lexington, Kentucky, expresses frustration with the disconnect between the large and important goals of education research, on one hand, and the gaps in measurement and statistical training, on the other. Buchanan writes: I don’t think that every classroom educator, instructional coach, principal, or […]

Gaydar and the fallacy of objective measurement

Greggor Mattson, Dan Simpson, and I wrote this paper, which begins: Recent media coverage of studies about “gaydar,” the supposed ability to detect another’s sexual orientation through visual cues, reveals problems in which the ideals of scientific precision strip the context from intrinsically social phenomena. This fallacy of objective measurement, as we term it, leads […]

Pizzagate: The problem’s not with the multiple analyses, it’s with the selective reporting of results (and with low-quality measurements and lack of quality control all over, but that’s not the key part of the story)

“I don’t think I’ve ever done an interesting study where the data ‘came out’ the first time I looked at it.” — Brian Wansink The funny thing is, I don’t think this quote is so bad. Nothing comes out right the first time for me either! World-renowned eating behavior expert Brian Wansink’s research has a […]

It’s not enough to be a good person and to be conscientious. You also need good measurement. Cargo-cult science done very conscientiously doesn’t become good science, it just falls apart from its own contradictions.

Kevin Lewis points us to a biology/psychology paper that was a mix of reasonable null claims (on the order of, the data don’t give us enough information to say anything about XYZ) and some highly questionable noise mining supported by p-values and forking paths. The whole thing is just so sad. The researchers are aware […]

Should we continue not to trust the Turk? Another reminder of the importance of measurement

From 2013: Don’t trust the Turk. From 2017 (link from Kevin Lewis), from Jesse Chandler and Gabriele Paolacci: The Internet has enabled recruitment of large samples with specific characteristics. However, when researchers rely on participant self-report to determine eligibility, data quality depends on participant honesty. Across four studies on Amazon Mechanical Turk, we show that […]

Measurement error and the replication crisis

[cat picture] Alison McCook from Retraction Watch interviewed Eric Loken and me regarding our recent article, “Measurement error and the replication crisis.” We talked about why traditional statistics are often counterproductive to research in the human sciences. Here’s the interview: Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is […]
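The core point of that article, that selecting on statistical significance in noisy, small-sample studies produces exaggerated estimates rather than conservative ones, is easy to see in a simulation. Here is a minimal sketch; the true effect, noise level, and sample size are illustrative assumptions of mine, not numbers from the article:

```python
# Illustrative simulation (parameters are my own assumptions, not from the
# article): many small studies of a small true effect, measured with noise.
# Conditioning on p < 0.05 exaggerates the estimated effect (a type M error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, noise_sd, n_per_group, n_studies = 0.1, 1.0, 20, 10_000

significant_estimates = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, noise_sd, n_per_group)
    control = rng.normal(0.0, noise_sd, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:
        significant_estimates.append(treatment.mean() - control.mean())

print("true effect:", true_effect)
print("mean |estimate| among significant studies:",
      round(float(np.mean(np.abs(significant_estimates))), 2))
# The statistically significant studies overestimate the effect several-fold.
```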

Two unrelated topics in one post: (1) Teaching useful algebra classes, and (2) doing more careful psychological measurements

Kevin Lewis and Paul Alper send me so much material, I think they need their own blogs. In the meantime, I keep posting the stuff they send me, as part of my desperate effort to empty my inbox. 1. From Lewis: “Should Students Assessed as Needing Remedial Mathematics Take College-Level Quantitative Courses Instead? A Randomized […]

In Bayesian regression, it’s easy to account for measurement error

Mikhail Balyasin writes: I have come across this paper by Jacob Westfall and Tal Yarkoni, “Statistically Controlling for Confounding Constructs Is Harder than You Think.” I think it talks about issues very similar to those you raise on your blog, but in this case they advise using SEM [structural equation models] to control for confounding constructs. […]
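Since the post title says it is easy to account for measurement error in a Bayesian regression, here is a minimal sketch of what that looks like: the noisily measured predictor is treated as a latent variable with a known measurement standard deviation. This is a generic illustration written in PyMC with simulated data, not code from the post or from Westfall and Yarkoni; all names and numbers are my own assumptions.

```python
# Minimal sketch (my own illustration): Bayesian regression where the predictor
# x is observed with known measurement error; the regression uses the latent x
# rather than the noisy measurements.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n = 100
x_true = rng.normal(0.0, 1.0, n)             # latent predictor
x_obs = x_true + rng.normal(0.0, 0.5, n)     # noisy measurement, known sd = 0.5
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 1.0, n)

with pm.Model():
    x_latent = pm.Normal("x_latent", mu=0.0, sigma=1.0, shape=n)   # prior on true x
    pm.Normal("x_meas", mu=x_latent, sigma=0.5, observed=x_obs)    # measurement model
    alpha = pm.Normal("alpha", 0.0, 5.0)
    beta = pm.Normal("beta", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 2.0)
    pm.Normal("y", mu=alpha + beta * x_latent, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

print(float(idata.posterior["beta"].mean()))  # close to 2, the true slope
```

Regressing y directly on x_obs would shrink the slope toward zero by roughly the reliability of the measurement; treating x as latent undoes that attenuation.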

Balancing bias and variance in the design of behavioral studies: The importance of careful measurement in randomized experiments

At Bank Underground: When studying the effects of interventions on individual behavior, the experimental research template is typically: Gather a bunch of people who are willing to participate in an experiment, randomly divide them into two groups, assign one treatment to group A and the other to group B, then measure the outcomes. If you […]
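One way to see the measurement point in this template: in a simple two-arm comparison, the standard error of the estimated treatment effect scales with the measurement noise divided by the square root of the sample size, so halving the noise buys as much precision as quadrupling the number of participants. A quick sketch with made-up numbers (not from the Bank Underground post):

```python
# Illustrative sketch (made-up numbers): precision of a two-arm experiment
# as a function of measurement noise versus sample size.
import numpy as np

rng = np.random.default_rng(0)

def sd_of_estimate(n=200, effect=0.2, noise_sd=1.0, reps=5_000):
    """Empirical sd of (treatment mean - control mean) across simulated experiments."""
    estimates = []
    for _ in range(reps):
        treatment = rng.normal(effect, noise_sd, n // 2)
        control = rng.normal(0.0, noise_sd, n // 2)
        estimates.append(treatment.mean() - control.mean())
    return float(np.std(estimates))

print(sd_of_estimate(noise_sd=1.0))          # ~0.14
print(sd_of_estimate(noise_sd=0.5))          # ~0.07: half the noise, half the standard error
print(sd_of_estimate(noise_sd=1.0, n=800))   # ~0.07: quadrupling n buys the same precision
```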

Moving statistical theory from a “discovery” framework to a “measurement” framework

Avi Adler points to this post by Felix Schönbrodt on “What’s the probability that a significant p-value indicates a true effect?” I’m sympathetic to the goal of better understanding what’s in a p-value (see for example my paper with John Carlin on type M and type S errors) but I really don’t like the framing […]
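For readers who want the arithmetic behind Schönbrodt’s question: under the usual screening model, if a fraction of investigated effects are real, tests have a given power, and the false-positive rate is alpha, then P(true effect | significant) = power × prior / (power × prior + alpha × (1 − prior)). A tiny sketch with illustrative numbers of my own, not taken from the post:

```python
def prob_true_given_significant(prior, power, alpha=0.05):
    """Standard positive-predictive-value calculation: P(real effect | p < alpha)
    when a fraction `prior` of investigated effects are real."""
    return (power * prior) / (power * prior + alpha * (1 - prior))

# Illustrative numbers (my own assumptions): with low prior plausibility and
# modest power, most "significant" findings do not reflect real effects.
print(prob_true_given_significant(prior=0.1, power=0.35))  # ~0.44
```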

Richard Feynman and the tyranny of measurement

I followed a link at Steve Hsu’s blog and came to this discussion of Feynman’s cognitive style. Hsu writes that “it was often easier for [Feynman] to invent his own solution than to read through someone else’s lengthy paper” and he follows up with a story in which “Feynman did not understand the conventional formulation […]

Measurement is part of design

The other day, in the context of a discussion of an article from 1972, I remarked that the great statistician William Cochran, when writing on observational studies, wrote almost nothing about causality, nor did he mention selection or meta-analysis. It was interesting that these topics, which are central to any modern discussion of observational studies, […]

Subtleties with measurement-error models for the evaluation of wacky claims

Paul Pudaite writes: In the latest Journal of the American Statistical Association (September 2014, Vol. 109 No. 507), Andrew Harvey and Alessandra Luati published a paper [preprint here] — “Filtering With Heavy Tails” — featuring the phenomenon you had asked about (“…(non-Gaussian) models for which, as y gets larger, E(x|y) can actually go back toward […]
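The parenthetical phenomenon, a posterior mean E(x|y) that first moves away from zero and then shrinks back toward it as the observation y grows, is generic to models whose observation noise has much heavier tails than the prior. Here is a small numerical illustration with a normal prior and Student-t noise; the distributions and degrees of freedom are my own choices, not the Harvey and Luati model:

```python
# Illustrative computation (my own choice of model, not Harvey and Luati's):
# x ~ N(0, 1), y = x + e with e ~ Student-t(df).  E(x | y) is non-monotone in y.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def posterior_mean(y, df=3):
    """E(x | y) by numerical integration of the posterior for x given y."""
    num = quad(lambda x: x * stats.norm.pdf(x) * stats.t.pdf(y - x, df), -np.inf, np.inf)[0]
    den = quad(lambda x: stats.norm.pdf(x) * stats.t.pdf(y - x, df), -np.inf, np.inf)[0]
    return num / den

for y in (1, 2, 4, 8, 16):
    print(y, round(posterior_mean(y), 3))
# E(x | y) rises with y at first, then turns around and heads back toward the
# prior mean: a far-out y gets attributed to the heavy-tailed error, not to x.
```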

Measurement error in monkey studies

Following up on our recent discussion of combative linguist Noam Chomsky and disgraced primatologist Marc Hauser, here are some stories from Jay Livingston about monkey research. Don’t get me wrong—I eat burgers, so I’m not trying to get on my moral high horse here. But the stories do get you thinking about measurement error and […]

Charles Murray’s “Coming Apart” and the measurement of social and political divisions

Following up on our blog discussions a year ago, I published a review of Charles Murray’s recent book, “Coming Apart,” for the journal Statistics, Politics, and Policy. I invited Murray to publish a response, and he did so. Here’s the abstract to my review: This article examines some claims made in a recent popular book […]