I received a few emails today on bloggable topics. Rather than expanding each response into a full post, I thought I’d just handle them all quickly.

1. Steve Roth asks what I think of this graph:

[Graph omitted: screenshot of a chart plotting study estimates of the employment elasticity of the minimum wage, on an axis running from roughly -20 to +5]

I replied: Interesting but perhaps misleading, as of course any estimate of elasticity of -20 or +5 or whatever is just crap, and so the real question is what is happening in the more reasonable range.

2. One of my General Social Survey colleagues pointed me to this report, writing “FYI – some interesting new results, using the linked GSS-NDI data. I trust this study will raise some heated discussions.” Another colleague wrote, “I’m rather skeptical of the result but at least they spelled GSS right.”

The topic is a paper, “Anti-Gay Prejudice and All-Cause Mortality Among Heterosexuals in the United States,” published by Mark Hatzenbuehler, Anna Bellatorre, and Peter Muennig.

My reaction: Yes, it seems ludicrous to me. Especially this:

The researchers wanted to make sure they were really seeing a link between earlier death and anti-gay prejudice — not something else that might be associated with being anti-gay — so they controlled for variables that could have confounded the results, including age, income, education, marital status, gender, religiosity, and even racial prejudice.

I love the way, in a study of “who had died by the end of the study period,” the factor “age” is just listed as one among many control variables. Of course the results will be completely sensitive to the modeling of age. I can’t be sure, but it looks like they just included age as a linear factor. Given the huge variation in anti-gay prejudice by age and cohort, I can’t take this sort of analysis at all seriously. I’d think the American Journal of Public Health could do better than this!

3. Somebody else wrote:

My colleague sent me this ridiculous article but this might not pass the smell test and it might be silly for you to write about an obviously crap paper. But it is published in a peer reviewed journal, and mentioned in a reputable newspaper, plus it is right along the lines of what you’ve been writing about recently. So what do you think?

The news article is entitled, “Physicists are more intelligent than social scientists, paper says,” and continues: “The difference is statistically significant only for physics and political science. But the paper’s co-author, Edward Dutton, adjunct professor (docent) in anthropology at the University of Oulu in Finland, said that the smaller differences between other subjects ‘went the same way’ . . . Dr Dutton admitted that a ‘niggle’ of doubt remained, which required replication with a larger sample to eliminate. However, many data problems that he had anticipated ‘didn’t seem to be that problematic’ when the paper was peer-reviewed.”

A “niggle,” huh? Seems a bit early to break out the N-word, no?

Also this weird bit, “‘Intelligence and religious and political differences among members of the US academic elite’, published in the Interdisciplinary Journal of Research on Religion, draws primarily on a 1967 study of 148 male academics at the University of Cambridge. . . .” Huh?

In any case, sure, I agree that physicists are more intelligent than social scientists, on average. That’s obvious. You don’t need to analyze a 47-year-old survey to tell me that! But the part I really loved was when the author of the study said he didn’t trust his statistics until they were peer-reviewed! That’s pretty scary.

4. I received the following press release in my inbox:

Converge Consulting Group Inc. Partner Robert Gerst, will be presenting, What Matters: How statistical significance demolishes productivity, competitiveness, and effective public policy, at the 20th Annual International Deming Research Seminar . . .

This year marks the twentieth anniversary of The Bell Curve, a publishing phenomena claiming to provide scientific evidence of significant differences in intelligence among human races. . . . Gerst details the statistical confidence trick of The Bell Curve that so successfully duped the public, leading media and news outlets . . .

Gerst will show how this same confidence trick, misusing statistical significance to mean practical importance, is corrupting: (i) academic research in psychology, biology, ecology, education, health, economics, medicine, (ii) business analysis, including market surveys, customer research, process improvement efforts, operational & organizational analyses, employee engagement research, and (iii) government studies, including public accountability reporting, policy research, and program evaluation.

“Executives and HR departments, for example, use the same junk-science as The Bell Curve, to measure employee engagement and improve productivity, and then wonder why productivity and engagement drop,” said Gerst. “Billions have been spent on, Value Added Assessments in education, Six Sigma programs in business, performance evaluation and accountability reporting in government. They’re playing the same statistical con as The Bell Curve and getting the same quality of results.”

No comment.

34 thoughts on “Quickies”

  1. “All the high-statistical-significance studies put elasticity at zero”

    Is there a way that sentence can make sense? Perhaps he meant “large-sample studies”. In which case I think it is not true, but I am about to follow up…

  2. Sorry, I skimmed past a sentence before posting the above — obviously, he did mean something like “power” by “statistical significance”, which is probably fine for a newspaper column.

    But we should be appalled by the graph and its use, an abuse of “significance” and its visual analog that deserves to be held up as a bad example in future textbooks.

    If the elasticity being estimated is below -1, there is essentially no upside to increasing the minimum wage, since in that case the workers affected would suffer a reduction in income (because the time spent out of work would outweigh the raise when employed). Between 0 and -1 we face a value judgment as to whether the increased pay is worth the increased unemployment. Personally, I would never support an increase if I thought the elasticity was below -1/3, but perhaps other equally informed people will say -1/2, or -0.1.

    So that’s the range that defines the economic, or human, significance of the estimate. Putting it on a graph with an axis running from -20 to +5 makes the whole relevant interval, from the no-brainer-must-raise-it value (0) down to the no-brainer-must-not-raise-it threshold (-1), look tiny. That is bad graphing. And to go on to say “the chart shows, the employment impact …clumps around zero,” is something worse.
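To make the break-even point concrete, here is a minimal sketch (my own illustration, using a hypothetical 10 percent wage increase, not a number from the thread) of how the total pay of affected workers responds under different assumed elasticities:

```python
def wage_bill_change(elasticity, wage_increase):
    """Relative change in total pay to affected workers, assuming
    employment scales by (1 + elasticity * wage_increase) and the
    wage itself by (1 + wage_increase)."""
    return (1 + wage_increase) * (1 + elasticity * wage_increase) - 1

# A hypothetical 10 percent minimum-wage increase:
for eps in [0.0, -1/3, -1.0, -2.0]:
    change = wage_bill_change(eps, 0.10)
    print(f"elasticity {eps:+.2f}: total pay changes by {change:+.1%}")
```

An elasticity of exactly -1 leaves total pay essentially unchanged (down only by a second-order term), which is why the comment treats -1 as the break-even; anything more negative means the affected workers lose income overall.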

    • The CBO report estimates that the absolute price elasticity of demand for low-wage work is about 0.1; that is, a 30 percent increase in the minimum wage would reduce low-wage employment by about 3 percent, from 17M to 16.5M, other things equal.
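A quick arithmetic check of the figures quoted in that comment:

```python
elasticity = 0.1       # absolute price elasticity of demand for low-wage work
wage_increase = 0.30   # a 30 percent minimum-wage increase
employment = 17e6      # roughly 17 million affected workers

employment_drop = elasticity * wage_increase        # 3 percent
new_employment = employment * (1 - employment_drop)
print(f"{employment_drop:.0%} drop: {employment/1e6:.0f}M -> {new_employment/1e6:.1f}M")
# prints "3% drop: 17M -> 16.5M"
```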

      • not quite… I guess while it’s possible to misinterpret badly the meaning of p-values and NHST results, it’s also possible to overstate wildly the case against them. Storytelling and wishful thinking abound in all directions. Sigh. I hope the conference attendees enjoy the presentation.

        • While it remains to be studied in detail, it would not surprise me if hundreds of billions of dollars of waste could be attributed to bad decisions arrived at via NHST use each year. I also do not understand what is being overstated (not having seen the presentation). Perhaps the same people would just misuse any other statistical tool the same way, but NHST is the one being taught and used now despite its well-documented flaws.

          I don’t see how the current climate of using NHST along with publication bias and p-hacking techniques is any different from measuring prevailing opinion. Your opinion is worth more the more funding you can get (use a nil null hypothesis and increase the sample size, etc.).

        • “it’s possible to misinterpret badly the meaning of p-values and NHST results”

          It’s possible? I would say it’s ubiquitous, as in, any situation in which it’s questionable whether the null model captures the underlying noise (wishful randomization). This is the case for a very large fraction of studies where randomization is not under the experimenter’s control.

          Also, how widespread is the belief that an observed p-value is the probability of a type-I error?
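Re that last question, a small self-contained simulation (my own sketch, not anything from the thread) illustrating the distinction: when the null is true, the long-run false-positive rate is set by the threshold alpha; an individual observed p-value is not itself the probability of a type-I error.

```python
import random
import statistics
from math import erf, sqrt

random.seed(1)
alpha, n_sims, n = 0.05, 2000, 30
rejections = 0
for _ in range(n_sims):
    # sample with TRUE mean 0, so the null hypothesis holds
    x = [random.gauss(0, 1) for _ in range(n)]
    # z-style test of mean 0 (normal approximation, for simplicity)
    z = statistics.mean(x) / (statistics.stdev(x) / sqrt(n))
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    if p < alpha:
        rejections += 1
print(f"long-run false-positive rate ~ {rejections / n_sims:.3f} (alpha = {alpha})")
```

The rate comes out near 0.05 (slightly above, since the normal approximation is used with n = 30): the type-I error probability is a property of the testing procedure, roughly alpha, not of any single observed p-value.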

  3. “They’re playing the same statistical con as The Bell Curve and getting the same quality of results.”

    Statistics is, in essence, organized noticing. And acts of noticing, such as The Bell Curve, are highly unfashionable these days.

    • Statistics is different from noticing things that support my hypothesis while ignoring those that support yours. Also, it is wholesale, honest, complete “noticing”; one swallow does not make a summer.

      One anecdote does not statistics make. That’s a general critique of your brand of “organized noticing,” not of The Bell Curve per se.

      • If psychologists are looking for replicability, then IQ is their greatest accomplishment. The Bell Curve’s chapters on the National Longitudinal Survey of Youth 1979 are now two decades old. We have 20 years more data on the 12,000 or so subjects and even on thousands of their children. If The Bell Curve’s findings weren’t replicable, you would have heard about it by now.

  4. The physicists-are-smarter article is silly in a number of ways, but stating that all of the effects are in the same direction is not one of them. Non-significant effects in the same direction as a significant finding provide additional evidence for that finding.
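The point can be quantified with a simple sign test: if under the null each comparison were equally likely to go either way, the chance that all of them point the same direction is small. (The count of six comparisons below is hypothetical; the article doesn’t say how many there were.)

```python
def prob_all_same_direction(n):
    """Probability that n independent null effects all share one direction
    (either all positive or all negative), each direction a fair coin flip."""
    return 2 * 0.5 ** n

# e.g. six comparisons all "going the same way" would itself be unlikely under the null
print(prob_all_same_direction(6))  # 0.03125
```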

      • Hey, this econophysics stuff is really cool. I especially like the sound of the word. Someone should start a new field, linguiphysics. Their students and grand-students could then branch this off to theoretical linguiphysics and computational linguiphysics, or maybe even experimental linguiphysics.

        • Andrew’s colleague at Columbia is fighting the good fight on that one:


          At this point, even if String Theory did have a correct experimental prediction under its belt, I’d still consider it major proof of how stale physics has become.

          Anyone who believes that 1960-2014 was anything like as productive a time in physics as 1900-1960 needs to retire. Anyone who believes this and cites as evidence how many more physics papers were published in 1960-2014 than in 1900-1960 needs to be fired.

          There’s no doubt in my mind that the reason physicists en masse are embarrassing themselves in other people’s fields is that Physics is boring now. I’ve personally known quite a few Solid State types who went to biophysics for just that reason.

        • I don’t think such a comparison is very meaningful. Once lower hanging fruit are plucked obviously the going gets harder. Diminishing returns and all that. Doesn’t mean that today’s physicists are any stupider. The problems just get harder.

          Besides, there are a lot more guys doing physics than there used to be, so obviously competition’s more intense too.

        • I didn’t say anything about Physicists getting stupid. How did you read that out of my remarks?

          Maybe we have reached the limit of what humans can do in physics. Maybe we haven’t. But that “low hanging fruit” stuff is just pure speculation and a highly convenient claim for any field that has grown stale.

        • I just looked at the list of Nobel Prize winners in Physics for the last 60 or so years. The period from 1900-1960 was phenomenally productive, but there’s been a lot of great work since and there’s also a lot of great work going on now. (The extent of my knowledge of contemporary physics research: I read Physics Today and try to stay abreast of developments in “applied quantum mechanics” – see, e.g., https://www.youtube.com/watch?v=wGkx1MUw2TU&feature=player_embedded – but I no longer follow the field in detail.) Physics is a very diverse field.

          That said, it seems to me that there’s a bias as to what constitutes “real physics”, i.e., what areas of study are taken seriously enough by the powers that be that a freshly-minted asst prof could get tenure. My impression is that String Theorists wield a disproportionate amount of power in top-tier academic departments. Not only is there the issue of the field’s own demerits, but there’s also the issue of crowding out. What work isn’t getting done in physics depts that could be getting done there because resources are going to unproductive areas instead*?

          For example, what fraction of atmospheric physics and climate science research is actually done in physics depts? (Climate science has a great deal of contemporary relevance, no?) My sense is that it’s pretty low. Physicists should own scattering theory, radiative transfer modeling, atmospheric circulation models, and other research areas which are at the core of contemporary climate science. But they don’t. As a field, physics has pretty much punted on them. The work is physics but, by and large, it doesn’t get done in physics depts. That’s a lost opportunity. To pile on a bit more, to what extent are physicists who work in physics depts (as opposed to other academic depts) pushing for new Earth-observing satellites or at the forefront of analyzing remote sensing data? Point the telescope out into deep space and they’re on it, but point it at the Earth and… meh… not much interest. Again, a lost opportunity.

          *A little more String Theory bashing: Nevermind correct experimental predictions, can String Theory even claim any incorrect ones? Isn’t that a gut check for any scientific field – the ability to formulate an experimentally testable hypothesis?

        • @ChrisG:

          I think what has happened is that a lot of the stuff that used to be done in the Physics Dept. in the past & which turned out quite practically useful has slowly gotten pushed into offshoot departments of its own, e.g. Material Sci, Metallurgy, Nanotechnology, Thermal Engineering, Atmospheric Sciences etc.

          Consequently what remains in the Physics Dept. is the extremely pure & fundamental.

        • @Rahul

          > Consequently what remains in the Physics Dept. is the extremely pure & fundamental.

          That’s the state of things. Physics depts should be doing fundamental work but to limit themselves to fundamental work is to pass on a lot of opportunities to make significant contributions to science.

        • Rahul,

          My experience with physics was exactly the opposite of what you describe. While math departments have shed successful areas to become departments in their own right (notably Computer Science and Statistics), that’s not what I’ve seen in Physics at all.

          “Consequently what remains in the Physics Dept. is the extremely pure & fundamental.”

          Again, that’s not what I saw. About half the physicists I’ve known left physics, for biophysics, economics, finance, computational science, or whatever. There may be a “physics” still left in their titles, but they’ve effectively left. The other half who stayed in Physics proper have been reduced to “Quantum Engineers”, applying QM the same way EEs apply Electricity & Magnetism and MEs apply mechanics. A small smattering engage in what increasingly looks to me like pure made-up mysticism cloaked in Differential Geometry or whatnot.

        • @Entsophy:

          It’s hard to argue against anecdotes, but I really think you are grossly mischaracterizing the Physics Departments of today. Assuming you are indeed saying they are staffed by “Quantum Engineers plus a smattering of mysticism”.

          On a whim I sampled Notre Dame’s dept.: 9 faculty self-identify as astrophysicists, 4 do experimental atomic physics, 10 are condensed matter experimentalists, 10 are experimental nuclear physicists, 13 are experimental particle physicists, 5 do both nuclear theory & experiments, and ~5 work on miscellaneous other topics such as Biophysics and Radiation.

          Now that still leaves us with 2 working on atomic theory, 3 on condensed matter theory, 2 on nuclear theory, and 6 on particle theory.

          Now my question to you is: which of these do you count as the mystics & which as “quantum engineers”?

        • Rahul,

          My experience in Physics is fairly recent. Nuclear Physics, Atomic Physics, Condensed Matter physics and so on are just applied QM. Astrophysics is applied Relativity, E&M, and whatever else, but it’s applying theories from Physics. Biophysics isn’t discovering new physics either.

          Particle physicists are the only ones on your list who are (arguably) adding to our storehouse of fundamental theories, but unlike Mechanics, Thermodynamics, E&M, QM, and Relativity, you’d be hard pressed to find anything they’ve done which someone outside of physics would ever care about (except the mystics of course).

          Obviously they do discover new effects sometimes. But then so do EE’s and ME’s.

        • @Entsophy

          That’s an interesting perspective. Two questions:

          (1) Those areas you label as “Applied QM”; what exactly is your critique of them again? Is it that they are stale / boring / unproductive? (Your words but not sure if you intend those for these areas in particular?)

          (2) If you did want to make a Physics dept. a powerhouse of new fundamental ideas what branches of physicists would you stock it with?

        • Rahul,

          I don’t have any critique of them. But if Physicists in a stale period for physics are faced with the choice of (1) leave physics, (2) Quantum Engineering, or (3) Mysticism, then many Physicists will choose (1) because many have little inclination toward engineering or mysticism.

          I’m claiming that’s why so many of them can be found invading other people’s fields.

  5. “the reason physicists en masse are embarrassing themselves in other people’s fields is that Physics is boring now”

    Although I must say that I know at least one physicist who does great work in psychology; there are probably quite a few more. But then there are the others, and it’s a truly amazing fact that German taxpayer money (and a lot of it) is going into funding them.

  6. On homophobia, my suspicion is that it is functioning as an interaction term. Given that homophobia is likely to be strongly correlated with socio-economic indicators, I suspect the study is just finding that poor and badly educated people are even more likely to be unhealthy, which is no surprise.

  7. Just to say, I was actually wondering what you thought of that graphical form, not so much that particular graph. As in, is that way of displaying many studies a useful way to view a review of the statistical literature on a subject? It seemed really good to me, especially combined with the article’s analysis of statsig bias among the studies.

    But yes, zooming in and analyzing this graph’s more credible data points would be necessary and useful.
