Naomi Wolf and David Brooks

Palko makes a good point:

Parul Sehgal has a devastating review of the latest from Naomi Wolf, but while Sehgal is being justly praised for her sharp and relentless treatment of her subject, she stops short before she gets to the most disturbing and important implication of the story.

There’s an excellent case made here that Wolf’s career should have collapsed long ago under the weight of her contradictions and factual errors, but the question of responsibility, of how enablers have sustained that career, and how many other journalistic all-stars owe their successes to the turning of blind eyes is ignored.

For example, Sehgal’s review ran in the New York Times. One of the most prominent voices at that paper, if not the most prominent, is David Brooks. . . .

Really these columnists should just stick to football writing, where it’s enough just to be entertaining, and accuracy and consistency don’t matter so much.

49 thoughts on “Naomi Wolf and David Brooks”

  1. Wrote that way too late at night. Left a few words out.

    “but the question of responsibility, of how enablers have sustained that career, and how many other journalistic all-stars owe their successes to the turning of blind eyes is ignored.”

  2. Well, I’ve gotten into the bourbon so this is likely to be more inane than usual – but here goes.

    This post, to which I respond, and that entitled “How statistics is used to crush (scientific) dissent” have, I think, something in common. That something is the power of storytellers and their propensity (a word I learned to use in my first stats MOOC!) to hijack statistics in order to tell a compelling yet false tale.

    Early in the latter post the word “power” appears, and here I (or more likely the bourbon) propose to explicate it.
    What power is being wielded? It seems to me to be the ability to weave a compelling tale out of percentages, probabilities and perils. Such are the skills of the experts I have combated for decades, and this piece, “The Unbearable Asymmetry of Bullshit” https://quillette.com/2016/02/15/the-unbearable-asymmetry-of-bullshit/, captures their tactics.

    So-called subject matter experts make millions annually (in my world anyway) by being able to recite by rote a complaint about every paper that refutes their claims (“power, alas, was too slight to have found what is clearly there”) while stacking up (“I didn’t know you could stack **** that high”) citation after citation from hundreds of papers that purportedly (p less than point oh-five) undergird their opinions.

    But what gives their bullshit power? When I read the post: “How statistics is used …” what immediately came to mind was the Great and Powerful Oz.

    For 70+ years Toto (from pre-Rozeboom to post-Gelman, in all Toto’s incarnations) has been pulling aside the curtain to reveal the humbug that is significance testing. And yet it hasn’t only been the humbugs pulling back the curtain. It’s been ordinary people. The consumers of statistics. People who yearn for certainty. They demand it. They want there to be a great and powerful Oz. Thus those, like Brooks, who spin yarn from numbers and sell it to make a handsome living by meeting demand with supply do so by stepping into the role of The Wizard of Oz … to peddle … humbug.

    • “storytellers…hijack statistics in order to tell a compelling yet false tale.”

      Anyone can tell a false tale without ever telling a lie, statistics or not. Simply choose what to mention and what to leave out, construct the narrative with a little care, and help people along the line to come to the “right” conclusion, without getting into the dirty and culpable business of lying.

    • Law schools should require a semester of statistics with an emphasis on the perils of the p-value. You would think law schools, of all places, would teach would-be lawyers how to weigh evidence. Instead you spend forever on what counts as hearsay.

        • Each side’s lawyer will push what it sees as the facts, I get it. But the judge, who is trained as a lawyer, needs to be able to sift through all that to determine what is a fact and then to weigh it appropriately.

      • Two things. First, I see I mangled my attempted metaphor. I intended to write that a lot of ordinary and often smart people want to close the curtain that Toto opened because they want to believe that simple truths can be had by asking simple questions of the great and powerful oracle known as NHST. Was coming off a jury win so there’s that excuse too. Second, sampling, significance testing, regressions, etc. are now a part of almost every case I handle. In fact, reflecting upon cases I had as a young lawyer practically all of them required data analytics; I just didn’t know it.

        What we did then and what most lawyers still do is hire subject matter experts with impressive CVs, a sage-like demeanor and a command of the literature. Only rarely did anyone question the numbers or analyses in the papers cited by the experts. Instead they used them like trump cards. Expert1: “Here’s a paper showing no effect.” Expert2: “Here’s a paper showing a significant effect with a sample population 10x bigger!” Etc.

        Nowadays, however, the battles are often over data, and they begin before the evidence is even gathered. Yet, if any of the little insights that I’ve gathered here are correct, it appears that the courts are busily creating a very bizarre statistical jurisprudence; one of which future lawyers ought to be disabused. Here’s a recent example with the facts changed to protect the identities of courts, lawyers and parties.

        Defendant is accused of selling a defective product. Plaintiffs want to discover whether Defendant knew the product was defective before it was sold and want to review all docs, data, emails, etc. including all metadata from or in the possession of anyone who developed, marketed, approved, etc. the product. Defendant says there is some ungodly volume of data that would satisfy such a request, that it would cost millions to cull through for relevance, privilege, HIPAA …, and that this is all just a scheme to shake down Defendant by imposing litigation costs so enormous that it’s cheaper to pay bogus claims. Plaintiffs, who want their money now and anyway don’t want to spend it on an army of contract lawyers hired to read through whatever mountain of docs is produced in hopes of finding a smoking gun, suggest *drum roll* a sampling strategy!

        A lawyer at the firm running the discovery aspect of the case sent around an email asking if we on the trial side were familiar with a good document discovery company who could also provide an expert to testify that 1% is the correct “sample percentage” and whether we were aware of any good cases to use to contest Plaintiffs’ claim that 30% is the correct sample percentage. I should have looked away but curiosity got the better of me.

        It didn’t take long to find a number of cases that reached this conclusion: “The right balance is struck by providing Plaintiffs’ discovery of a statistically significant sample.” That actual quote, from a published opinion, expresses the belief that now drives a sort of rule of thumb whereby for large data sets 5%-10% is “a statistically significant sample” whereas for smaller ones 30% is needed for “a statistically significant sample”. Heeding what I’ve learned from Greenland, Senn, et al., I looked further to see if anyone had bothered to ask questions like “what is the chance that there’s a needle in any given haystack?”, “what would be the value of the needle you hope/fear to find?” or “what would you be willing to pay to find that needle?” No luck.

        So (bringing this over-long post to a sudden close) I got fired up, wrote up a reply email with my thoughts about how they could craft a more compelling argument for a better approach, suggested experts who could explain how this might more sensibly be done and then made a Shiny app with sliders (e.g. p(needle)) so they could demonstrate to the court how the answer for sample size might change given power and, e.g., the cost the parties would be willing to incur given what they stand to gain/lose if the needle is found … all pro bono though that client hardly needs it. This is the reply that came back: “Our (mutual) client (an in-house attorney) just wants a case and/or an expert that says 1% is the correct sample percentage for statistical significance.” Later someone else in the chain responded that he had identified an expert who would so testify.
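
        For anyone curious what those sliders compute, here is a minimal Python sketch of the same cost-benefit logic; every number in it (the prior probability a “needle” exists, how many there are, their value, the per-document review cost) is a made-up placeholder, not a figure from the case:

        ```python
        # A rough sketch (not the actual app) of the cost-benefit view of
        # discovery sampling. Every number below is a hypothetical placeholder.

        import numpy as np

        N = 1_000_000        # documents in the production (assumed)
        cost_per_doc = 2.0   # review cost per sampled document (assumed)
        p_needle = 0.3       # prior probability any "needle" exists (assumed)
        k = 5                # number of needles, if any exist (assumed)
        value = 5_000_000    # value of finding at least one needle (assumed)

        f = np.linspace(0, 1, 1001)                  # candidate sampling fractions
        p_find = p_needle * (1 - (1 - f) ** k)       # P(sample holds >= 1 needle)
        net = p_find * value - f * N * cost_per_doc  # expected gain minus cost

        best = f[np.argmax(net)]
        print(f"optimal fraction ~ {best:.0%}, expected net ${net.max():,.0f}")
        ```

        Move any of those inputs and the optimal fraction moves with them, which is the point: nothing about “statistical significance” hands you a 1% or a 30%.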

        Thus another reason, perhaps, why law schools IMHO ought to at least offer as an elective a course in decision theory that deals with significance testing in modest detail. And yes, this episode has thoroughly p****d me off. Thanks for providing space for me to vent.

        • And then there’s the question: Suppose someone in a jury pool mentions that they are statistically literate. Suppose that an attorney, in voir dire, asks them a question about how large a sample is needed to be considered “statistically significant”, and the prospective juror gives a response pointing out the problems with the question. Is the probability that the prospective juror gets on the jury ever greater than zero?

        • Something close once happened to me. There was a biostatistician postdoc on our jury panel from a notoriously Lefty school in California, and he looked like he’d fit in well (that, or it had fitted him). During voir dire he said the expected anti-corporate sorts of things (probably to get out of serving). But I left him on, and luckily for me plaintiff’s counsel didn’t strike him.

          The plaintiff’s case turned on whether the cancer from which he suffered was a particular rare one. His expert was relying on immunohistochemical staining to prove that it was. Certainty being the topic of the day: if you’re not familiar with IHC stains, they can be very useful but were, at least at that time, far from perfect discriminators. My defense was that, given the absence of any other confirmatory evidence, the fact that whatever cancer he had looked to his pathologists like any of several much more common metastatic tumors, and the false positive rate of the IHC stain in question, his was very unlikely to be the rare one he asserted.

          Plaintiff’s expert claimed to be highly skilled in looking at stains and determined that this one “leaves no doubt”, that my expert’s calculations applied only to average diagnosticians, and that his skills were far above the average (and he had a couple of awards on his c.v. that he held up as evidence for the claim). The biostatistician turned out to be the foreman and, happily for me, as I learned from a post-trial interview, was as skeptical of the boasts of physicians as he was of those of corporations.
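
          The defense argument here is, at bottom, a base-rate calculation, and it is worth seeing how hard the base rate bites. A minimal Python sketch, with all three numbers invented for illustration (the comment gives none):

          ```python
          # Posterior probability of the rare cancer given a positive IHC stain.
          # All three inputs are invented for illustration.

          prior_rare = 0.02   # base rate of the rare type among plausible diagnoses (assumed)
          sensitivity = 0.95  # P(positive stain | rare type) (assumed)
          false_pos = 0.20    # P(positive stain | common look-alike tumor) (assumed)

          p_positive = sensitivity * prior_rare + false_pos * (1 - prior_rare)
          posterior = sensitivity * prior_rare / p_positive
          print(f"P(rare type | positive stain) = {posterior:.2f}")  # about 0.09
          ```

          Even a stain that almost always lights up on the rare type leaves the rare diagnosis improbable once the common look-alikes and the false positive rate enter the calculation, however many awards the diagnostician holds up.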

        • I’ll offer my own jury story of how juries can sometimes choose foremen whom one might not predict, but who are very good choices:

          I was on a jury of six people, five white and one black. The defendant and the guy he hit were both white (defendant a husky football player; the guy he hit a scrawny guy). We chose the one black man as the foreman — he was the oldest, and just projected wisdom, patience, or something that made him the clear choice for all of us. After discussion, five of us were convinced that the defendant was guilty, but one woman just couldn’t accept convicting a “nice young man who was a football player”. However, the foreman very gently and diplomatically convinced her that, as unpleasant as the conclusion was for her, the evidence was clear that the defendant was guilty, and she acknowledged that he was correct.

        • A search using the phrase “a statistically significant sample” (SSS) yielded 145 opinions and orders. Here are a few quotes from cases discussing SSS and deciding the correct SSS before any SSS-ing is done (because a storm rolled in here ruining my picnic):

          “Defendants contend that the Court should only require a statistically significant sample.” Plaintiffs wanted a really, really SSS.

          “Although challenged by defendants, the court finds the statistical random sample generated by [expert] to be an appropriate method of sampling for the purpose of projecting cash receipts over the entire period at issue where inadequate records have been provided and finds the particular sample to be statistically significant.”

          “A random and statistically significant sample is highly likely to accurately represent the putative class …”

          “Plaintiff asserts that a sample of 400 loans per population will provide a confidence level of approximately 95%, with a 5% margin of error. … The court does not find any prejudice in deciding the motion before it and allowing the use of statistically significant samples of the securitizations at issue.”
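
          The “400 loans” in that last quote is at least recognizable arithmetic: it is the textbook worst-case sample size for a ±5% margin of error at 95% confidence, a statement about precision that has nothing to do with “statistical significance”. A quick check, assuming simple random sampling and the worst-case proportion p = 0.5:

          ```python
          # Worst-case binomial sample size: n = z^2 * p * (1 - p) / MOE^2
          z, p, moe = 1.96, 0.5, 0.05
          n = z ** 2 * p * (1 - p) / moe ** 2
          print(round(n))  # 385, conventionally rounded up to about 400
          ```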

        • Oh boy, this is really disturbing. Maybe the ASA needs to issue a statement? And maybe law students need courses that discuss “common misunderstandings of statistical concepts that arise in law.”

        • Reading this is like accidentally kicking over a rock and exposing all the squirmy things under it. “Statistically significant sample”? Fisher wept.

        • My experience with forensics in construction was the same: everyone routinely asked “how many sites do we have to visit in order to get a statistically significant sample?” … this was then a cue for me to go into a long spiel about how you learn *something* from every sample, and how the only way to decide how big a sample to take is to balance costs and benefits and choose the optimal sample size.

        • As a now retired (but still available) statistical expert witness, I can’t count how many times I’ve been asked this exact question by lawyers, or asked to provide opinions on a sampling regime that *has* to be one pass without any examination of the data. All the questions I have asked, like “What are you trying to measure? How accurate do you want to be? What expectation do you have about the (say) binomial parameter you’re trying to estimate? How about a two-stage analysis with an initial pilot sample?”, etc., fall on not just deaf ears but impatient ones: “Look. Can you say that a sample of 1,000 is enough or not?” Partly, of course, that’s because neither side’s lawyers are interested in truth, but in maximizing their chances of winning.

          Best story: I was testifying about an analysis whose central result (it was a long time ago) was a p-value of 0.032. The lawyers wanted me to demonstrate why that was important to the jury. I eventually did something that I still think was pretty clever, but my initial sally was to try and get the lawyers to put their money where their mouth is: run an actual experiment (using dice or a big spinning wheel) in which there was a 0.032 probability that a uniform random process yielded a result that extreme or more so. The lawyers were horrified. “We can’t do that! What if the 3 percent chance happens live?”
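
          The demonstration the lawyers refused to run live is harmless in simulation, where a bad draw costs nothing. A sketch (the 0.032 comes from the story above; everything else is illustrative):

          ```python
          # How often does a uniform random process produce a result as extreme
          # as one with p = 0.032? Spin the wheel many times and count.

          import random

          p, trials = 0.032, 100_000
          hits = sum(random.random() < p for _ in range(trials))
          print(f"{hits}/{trials} spins ({hits / trials:.1%}) were that extreme by luck")
          ```

          Which is exactly the lawyers’ fear: roughly one live demonstration in thirty would have gone the wrong way in front of the jury.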

  3. The content of this post, and the one linked to in it, reminded me a lot of (recent discussions about) some scientists, and scientific fields. “Once you’re on the list, you’re pretty much set” is the title of the blog post linked to above, and I feel this may pretty accurately describe what can happen, and possibly has happened, concerning some scientists and associated topics of study (e.g. “priming”?).

    To me it sometimes looks like scientists can almost become invincible to any criticism, as in the passage quoted above in the post: “There’s an excellent case made here that Wolf’s career should have collapsed long ago under the weight of her contradictions and factual errors (…)”. And it also seems to me that some scientists get way too much credit, money, and resources without having really good ideas, or writing thoughtfully, etc.

    It’s like some sort of self-sustaining thing, where everybody plays their (unconscious?) part, and where the outcome is increasingly removed from facts, solid reasoning, careful thought, etc. Perhaps this process describes some problematic issues and processes in social science/academia. I have thought, and mentioned before, that it could be the case that when a certain percentage of people in a field stop prioritizing facts, solid reasoning, careful thought, etc., the entire field risks turning into sh#t. Maybe some of these people are the “enablers” the quote above refers to.

    The problematic thing might be that once there are too many “enablers” in a certain (sub-) field or (specific) topic AND things like solid reasoning and facts don’t “count” anymore, it may become almost impossible to correct matters. I also wonder whether that in turn leads to the last few “good” people leaving the field, while “bad” people are being hired and stay in the field (e.g. because they don’t see a problem). Again, it may be some sort of self-sustaining thing.

    Assuming the above makes at least some sense, I find it interesting (and possibly useful) to think about how to improve matters. I have thought about this, and I think:

    1) Ridiculing things may work for some reason (using sarcasm may work for some reason).

    2) Major “shocking” events could help open everyone’s eyes to the possible ridiculousness concerning the field, and (some of) the people in it (e.g. think Diederik Stapel and the subsequent “Levelt report”, and Daryl Bem’s work on ESP, etc.).

    3) Perhaps it could be useful to start a different version of the field, without any of the problematic processes and “bad” people of the current one. The possible contrast of the two fields could make problematic things clear to others, and may also attract the “good” people. The “bad” field, and “bad” people in it, might then simply wither away over time…

    There must be theories, studies, etc. about this kind of thing. Maybe it resembles parts, and/or processes, of cults in some way. If I am not mistaken, there are theories and studies about that.

  4. Read the review of Wolf in the NYT; the same applies to statistical methodology, or science, or whatever. She says, “I know my books are true.”

    Endless nitpicking about methodology is completely missing the point. These people already know the answers. They don’t need to do anything correctly, or care about valid research. Science doesn’t matter until they think it backs them up, at which point it becomes “The Science”; see climate change. If it contradicts them, it’s “pseudoscience” or “neurosexism”; see sex differences.

    FFS, the NYT carried an opinion piece recently disputing the athletic benefits of testosterone. Bald, bare-faced lies. Pure, unadulterated anti-biology. You can argue about p-values or power or whatever with these people, but you’re bringing a knife to a gunfight.

    “Throughout it all, she remains impervious to criticism. “I’m lucky,” she said in a recent profile in The Guardian. “I had a good education. I know my books are true.”

    Not accurate or factual, but true. This is a key to understanding why charges of sloppiness or misrepresentation don’t seem to stymie, or even embarrass, writers like Wolf (or Jared Diamond and Annie Jacobsen, who have both been involved in similar scandals in recent weeks, facing them with the same blithe indifference). The issue isn’t simply that publishers don’t spring for fact-checking and leave writers vulnerable to making such errors. These writers see themselves in service of something larger than grubby reporting. “The important thing is that these stories are told,” Wolf recently told The Times of London. They are the emissaries of great stories, suppressed stories, and if they take liberties or eschew careful research — as consistently as Wolf has done — it is because they believe they have a right to them, that the story, the cause, somehow sanctions it.”

      • Depressing.

        Here is the PhD thesis of one of the most vocal activists in the area of that NYT article, the one who gets into twitter spats with Martina Navratilova: https://uwspace.uwaterloo.ca/handle/10012/6619

        From the abstract:

        “There’s a widespread conviction in the norms of assertion literature that an agent’s asserting something false merits criticism. … I argue, whether one’s assertion is true or false is not strictly relevant to the normative evaluation of an assertion. What is relevant is whether the speaker has adequate supporting reasons for the assertion, and that the necessary conventional and pragmatic features are present …”

        • I know someone who talks about different people’s “truths”. Sounds really weird to me. As best I can figure out, she uses “truth” where I would say “beliefs”.

        • I think what is meant is that when people get their informal posterior probabilities high enough, they round them up to 100% and stop worrying about whether they’re false. But people have access to different information, and therefore they can come to different conclusions on this basis… most people deal with this stuff at a heuristic level, and it’s objectively hard to change people’s minds once they settle into one of these “truths”… so Bayesian-type thinking plus excessive round-off and lousy data and lousy models leads to different assertions of near certainty from one person to another.

          Recognizing this can happen can help undo the bad effects. Giving people a way out of the round-off black hole can help them get to a more rational set of ideas.

        • Nah, I think you’re being too much of a statistician about this. Testosterone has athletic benefits. It’s not round-off error in posterior probabilities; it’s denying fundamental biological reality in the service of an ideology.

          I think Martha has it correct, “truth” = “beliefs”.

          Do you remember who popularized the phrase “her truth” in the US? Oprah, that’s who. Of course then it was usually about a plucky woman who was going to tell the audience about “her truth” fighting some injustice or obstacle. Usually some self-help book writer or actress. Who cares?

          People started caring soon enough when it was anti-vaxxers wrapping themselves in the same protective shield. “Her truth” was that vaccines gave her child autism. Lived experience and all that.

          Suddenly, objective truth and reality was back in fashion. Same with feminism and trans activism. A couple of years ago Navratilova was shouting down anyone who pointed out Serena Williams wouldn’t even make minimum wage playing in the men’s game. Now she’s reading research papers on the effects of testosterone in puberty to defend the category of ‘female athlete’.

          The point I was trying to make with the thesis is that not everyone is arguing in good faith. Some have no problem lying or bullying, in the service of some higher ‘truth’. This truth can’t be tested with your tools, because your tools are part of the current hierarchy and thus are designed to uphold it.

          The evolutionary psychologists just seem to be idiots. It might seem unfair to dismiss an entire field but it genuinely appears like they learn about significance testing and head off into the world to discover facts.

          But their understanding of statistics is skin deep. They make every basic error and when challenged console themselves with the idea that they understand statistics and science, but their challengers don’t.

          I mean, an R^2 and residual plot is stats 101 right? But the guy won’t post them, yet he thinks he has the moral high ground. Those other people are just Luddites trying to drag him down.

          Their entire field seems to be like this. They proclaim ‘facts’ about intelligence and personality. These proclamations are always of the form ‘characteristic X is the best predictor of outcome Y’. Never any couching or doubt. Just point estimate + significance = fact.

          When you read the books though, all the correlations are about 0.3. That’s it. They don’t consider the range of this correlation, no sampling variation, no confidence intervals. It’s 0.3. Fact.
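
          To put a number on the omitted sampling variation: a 95% interval for a correlation of 0.3 follows from the Fisher z-transformation. A sketch, with n = 200 as an assumed, roughly typical sample size:

          ```python
          # 95% confidence interval for r = 0.3 via the Fisher z-transformation.
          import math

          r, n = 0.3, 200            # n is an assumed sample size
          z = math.atanh(r)          # Fisher transform of r
          se = 1 / math.sqrt(n - 3)  # approximate standard error on the z scale
          lo = math.tanh(z - 1.96 * se)
          hi = math.tanh(z + 1.96 * se)
          print(f"r = {r}, 95% CI ({lo:.2f}, {hi:.2f})")  # roughly (0.17, 0.42)
          ```

          So even before any worry about bias or measurement, “0.3” is really “somewhere between weak and moderate”.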

          I don’t particularly care about evolutionary psychology, but these people are the only ones arguing with the educational blank-slaters. So we’re all screwed either way.

        • Bob:

          This reminds me of some frustrating exchanges I’ve had with Steven Pinker. He recognizes that the blank slate position is ridiculous and that some aspects of evolutionary psychology have to be true, but then he jumps to thinking there must be something to any particular evolutionary psychology paper. In contrast, I think it’s possible for evolutionary psychologists to be studying phenomena that have some underlying truth, but their experiments can be so noisy as to be useless. For example, consider that study claiming that women were 20 percentage points more likely to support Barack Obama at certain times of the month. Political attitudes may well vary based on time of the month, but the real changes have to be much much less than 20 percentage points, and any real effect—in either direction—would be overwhelmed by bias and variance of noise. Evolutionary psychology being true has nothing to do with it. The problem is in the math, that you can’t reliably study an effect if your bias and variation are larger than the underlying effect being studied. In short, I think Pinker was arguing based on truth and I was arguing based on evidence.
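
          A quick simulation makes the bias-and-variance point concrete. The numbers are illustrative: a true 2-percentage-point swing measured with a standard error of 8 points, plausible for a small between-group survey comparison:

          ```python
          # When the noise dwarfs the effect, estimates routinely carry the wrong
          # sign or exaggerate wildly. All numbers are illustrative.

          import random

          true_effect = 0.02  # a real 2-percentage-point shift (assumed)
          se = 0.08           # standard error of the comparison (assumed)

          est = [random.gauss(true_effect, se) for _ in range(10_000)]
          wrong_sign = sum(e < 0 for e in est) / len(est)
          exaggerated = sum(abs(e) > 5 * true_effect for e in est) / len(est)
          print(f"wrong sign: {wrong_sign:.0%}, off by more than 5x: {exaggerated:.0%}")
          ```

          Roughly 40% of such estimates point the wrong way and about a fifth are off by more than a factor of five, so the estimate says almost nothing about the underlying effect, whatever the truth of the theory behind it.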

        • Quote from above: “Suddenly, objective truth and reality was back in fashion. Same with feminism and trans activism. A couple of years ago Navratilova was shouting down anyone who pointed out Serena Williams wouldn’t even make minimum wage playing in the men’s game. Now she’s reading research papers on the effects of testosterone in puberty to defend the category of ‘female athlete’.”

          Navratilova seems to have recently been kicked out of a certain LGBT group for sports people because she said something like “it was a form of ‘cheating’ for transgender women to be allowed to compete in women’s sport.”

          Perhaps it’s a sign that something might be going wrong when someone like Navratilova gets kicked out of a current LGBT group.

          https://edition.cnn.com/2019/02/20/tennis/martina-navratilova-dropped-lgbt-group-scli-spt-intl/index.html

        • It’s a little gobbledygook to me, but I think it’s saying that if you have a good reason to believe a thing is true, then this makes it OK to assert its truth; you don’t actually need to have either talked to God to verify that it’s true, or proven it logically from first principles… Thus I believe I’m posting to Andrew Gelman’s blog and not some hacked Russian man-in-the-middle spy machine… if it turns out later that the Russians did hack Gelman’s domain name server and insert their spy machine between me and Andrew’s real blog, I shouldn’t be held as irresponsible just because I asserted an incorrect fact, if there was no reason to believe it was incorrect.

          What’s really problematic is when you take this norm basically to sanction assertions from ignorance. Failing to have even basic checks on the correctness of your assertions isn’t normatively OK. But at the same time, not realizing hidden facts isn’t a moral failing.

        • Also relevant, I think, is the legal concept of a “credible witness” — but “credible” is in practice a very subjective thing. So many factors can influence someone’s judgment of whether or not someone else is “credible” (and, regrettably, sometimes people’s criterion for “credible” is “agrees with me”).

        • Similar with “evidence”. Anything can be “evidence” of anything. As long as you can convince a few people, it’s evidence! Once something is “evidence” then it’s a battle of whether it’s meaningful or not meaningful, or the degree to which it’s meaningful. Often with “evidence” you don’t even have to name it or assess it, it’s just “There is evidence that…”, so now that there’s “evidence” (meaningful or not, explanatory or not) the cause is suddenly justified.

        • Part of what’s going on here (both with “credible” and “evidence”) is dichotomizing rather than thinking in “shades of gray”. Some of us, at least, try to use (and promote use of) qualifications such as “some evidence,” “purported evidence,” “strong evidence,” etc. to try to get out of the “certainty” trap.

      • A twitter thread I saw recently, featuring evolutionary psychologists who seem to really like linear regression.

        Some guy fits a line to a bunch of categorical (self-reported) data. Decides it’s significant, and ‘the most robust predictor’ (without testing for robustness).

        A bunch of people ask to see the R^2 and residuals without success, he just posts his p-values over and over again. His results are ‘significant’, he LOLs at the idiots who don’t understand stats like him.

        He misunderstands a Bonferroni correction and doesn’t understand the criticism of ‘line fitting’.

        https://twitter.com/NicoleBarbaro/status/1134103469533618176

        But he’s doing ‘science’ and that’s what matters.
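
        For anyone following along, the diagnostics being asked for take a few lines. A generic sketch with simulated stand-in data (the actual dataset is not public), showing how a “significant” slope can coexist with an R² near zero:

        ```python
        # Fit a line, report R^2, inspect residuals: the "stats 101" request.
        # The data is simulated, standing in for the unavailable original.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.integers(1, 8, size=300)          # 1-7 Likert-style predictor (stand-in)
        y = 0.3 * x + rng.normal(0, 2, size=300)  # weak signal buried in noise

        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        r_squared = 1 - resid.var() / y.var()
        print(f"slope = {slope:.2f}, R^2 = {r_squared:.2f}")  # "significant", tiny R^2
        ```

        A p-value can be minuscule while the fit explains well under a tenth of the variance, which is why people ask for the R² and the residuals in the first place.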

    • Wolf is engaged in a political project, and in politics, facts only kinda matter. As the rising star from New York Rep. Alexandria Ocasio-Cortez says, “I think that there’s a lot of people more concerned about being precisely, factually, and semantically correct than about being morally right.”

  5. @Andrew: For some reason I can’t post a reply to your post above, but I agree about Pinker. Taleb frequently posts on twitter about him, pointing out that you can’t compute averages of fat-tailed distributions, because the LLN does not apply. This is a mathematically valid point, and he calls the approach ‘naive empiricism’. I’ve never seen Pinker or any of his defenders address this simple point, though. I don’t understand his thought process; maybe he thinks he is so enlightened and progressive that anyone who disputes his thesis is irrational.
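
    Taleb’s LLN point is easy to verify for yourself: the Cauchy distribution is the textbook fat-tailed case with no mean, so sample averages have nothing to converge to. A minimal sketch:

    ```python
    # Sample means of a Cauchy distribution never settle down as n grows,
    # because no mean exists for the law of large numbers to converge to.

    import numpy as np

    rng = np.random.default_rng(1)
    for n in (100, 10_000, 1_000_000):
        print(n, rng.standard_cauchy(n).mean())
    # The average keeps lurching with single extreme draws, however large n is.
    ```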

    Funnily enough Taleb has a similar beef with the evolutionary psychologists, pointing out (correctly in my opinion) that they don’t understand correlation. Not one of them seemed to get the point he was making, not one understood the mathematics of what he was saying and could address it. I found that absolutely shocking.

    Fortunately for them, Taleb is the kind of person you can accuse of being ‘abusive’ (because he frequently is), so in the end they all blocked him and went on their merry way. But again, I don’t understand how they can be so sure of their methods and absolutely immune to anyone questioning their mathematics. They’ve just decided everyone else is irrational or ignorant.

    Like you say, a lot of the effects they study would be dominated by noise, but there’s no recognition of that. Just fit a regression line, result is significant, X predicts Y.

      • That review starts out ok, but as quite a few of the comments point out, the reviewer has just as simplistic a framework for analysis as Diamond does.

        He has a successful career because he sold a shedload of books. It’s not a conspiracy to signal-boost a white man (Boo!) at the expense of non-white women; he simply sold a lot of books. This gets you another book deal. Naomi Wolf feeds the bottom line, so she gets published.

  6. I’m sure they do at first, but not anymore where Diamond is concerned. Once he’s had a bestseller he’s bankable.

    These connections though, are they because he’s a white man, as the reviewer makes out, or because he has the right background? According to Wikipedia, his education is Harvard and Cambridge.

    Reading on though:

    “After graduation from Cambridge, Diamond returned to Harvard as a Junior Fellow until 1965, and, in 1968, became a professor of physiology at UCLA Medical School. While in his twenties he developed a second, parallel, career in ornithology and ecology, specialising in New Guinea and nearby islands. Later, in his fifties, Diamond developed a third career in environmental history and became a professor of geography at UCLA, his current position. He also teaches at LUISS Guido Carli in Rome. He won the National Medal of Science in 1999 and Westfield State University granted him an honorary doctorate in 2009.”

    So, to be fair to the guy, he’s done the work: three academic careers, for crying out loud. His writing may not be to everyone’s taste, but it’s not like he’s blagged his way to where he is. Maybe he got a leg up at the start, but remember it was at the expense of other white Jewish guys at Harvard, just like him. That’s what those who look at the ex-post demographic makeup of these positions ignore.

    If there are alternative writers with similar breadth of education who could write interesting books, I’d hope they get published too. But that reviewer doesn’t talk about anyone like that; he talks about writers who would represent other groups. That’s the criterion he’s using: Diamond is in the wrong group for his liking.
