“As a girl, she’d been very gullible, but she had always learned more that way.”

I keep thinking about the above quote, which is from the Lorrie Moore story, “Community Life.” I’ve read some Lorrie Moore from time to time, but I found out about this particular story by hearing it on the New Yorker fiction podcast (which I absolutely love, but that’s a topic for another post).

What struck me about the above quote was the idea that you can learn more from being gullible than from being guarded.

Or, to put it another way, that you can learn more from being open-minded than from being skeptical.

It reminded me of this quote from Steven Pinker that we’ve discussed before, supporting “a measured approach to scientific replication: Rigor, of course, but put a lid on the aggression & call off the social media hate mobs.”

I’ll get back to Pinker in a bit, but first let me continue on the theme of the benefits of gullibility or open-mindedness.

To start with, the idea that you can learn more from being open-minded than from being skeptical is one of those paradoxical-sounding statements that have the ring of truth. It reminds me of the advice they give in brainstorming sessions to just toss in ideas without filtering them. I’m a pretty skeptical person sometimes, and skepticism has its place—for example, I don’t think the government should be spending millions of dollars on unproven ideas such as ESP or the power of bottomless soup bowls. (Sure, all those Wansink claims could be true, but recall Daniel Davies’s principle that good ideas do not need lots of lies told about them in order to gain public acceptance.) Nor would I recommend spending any more taxpayer dollars or grad student hours on “power pose” or the much-debunked-and-so-ridiculous-it’s-sad-that-anyone-had-to-waste-any-time-debunking-it critical positivity ratio.

But those are specific cases. Thinking more generally, and operating under a veil of ignorance where we purposely don’t investigate these claims in detail, it can be appealing to keep a generally gullible attitude in order to let the thoughts flow more smoothly. Being credulous about the critical positivity ratio (according to one of its proponents, “Just as zero degrees Celsius is a special number in thermodynamics, the 3-to-1 positivity ratio may well be a magic number in human psychology”) might be silly in itself, but it could free your mind to come up with more interesting and actually true theories. Similarly for vaccine denialism or Holocaust denialism or flat-earth theories or magic magnets or subliminal smiley faces: these models of the world fall somewhere on the continua between silly, offensive, and dangerous, but on purely intellectual grounds, there could be a benefit to entertaining the most ridiculous ideas, in the same way that an expert debater is supposed to be able to take any position on any issue.

So, for the sake of argument, let’s accept the view that we can learn more from being gullible (or, to put it more politely, open-minded), which is related to the Chestertonian principle that extreme skepticism is a form of credulity, and let’s accept that instead of poking holes in statistical claims, we should frame everything positively.

I’m willing to consider that position. It’s not the position I’ve taken—I’m ok with saying negative things about other people’s published work, if I think that work is flawed, and I’m also ok with other people saying negative things about my work, indeed I’ve learned a lot from negative comments (for example these harsh comments which led to this work)—but I’m open to the idea that we should be doing things differently. Sure, the lack of negative feedback would slow down my own research progress and others’ too, but maybe it would be worth it for the countervailing gains.

The big problem with generic open-mindedness

The big problem with open-mindedness is deciding what to be open-minded about. A few years ago, the Journal of Personality and Social Psychology wanted us to be open-minded about a claim that Cornell students have ESP—but I’m guessing they wouldn’t have wanted to be open-minded about spoon bending, astrology, etc. Or maybe spoon bending and astrology, but not the flat earth and Bigfoot.

To return to Steven Pinker, in his above link he’s supporting a call to be open-minded about power pose and critical positivity, and elsewhere he and his friend Alan Dershowitz have recommended that we be open-minded about torture for terrorism suspects.

That’s his call—but then why does he not want to be open-minded about other controversial scientific positions such as “blank slate” theory or creationism, or other controversial policies, such as, I dunno, torture for white-collar criminal suspects or sex traffickers?

In Pinker’s memorable words:

Perhaps you can feel your blood pressure rise as you read these questions. Perhaps you are appalled that people can so much as think such things. Perhaps you think less of me for bringing them up. These are dangerous ideas — ideas that are denounced not because they are self-evidently false, nor because they advocate harmful action, but because they are thought to corrode the prevailing moral order.

My point here . . .

I’m not arguing that all theories are equal. My point is that open-mindedness exists only in context: you have to decide what to be open-minded about.

To return to the example of “brainstorming”: In a brainstorming session, we agree to share ideas without criticism—but this only goes for ideas submitted from people inside the room. Steven Pinker wants to be open to ideas submitted from “inside the room” of Harvard or of various circles of pundits—but only some sorts of pundits. Pinker’s open to the pundits who say that cops should be allowed to torture terrorism suspects, but not those who would torture embezzlers or sex traffickers.

Again, at some level, that’s fine. I’ll read just about every email that’s sent to me, and I’ll respond to all sorts of blog comments—but some ideas are expressed so incoherently, or are so far out there, that I’m not going to bother with them. That’s unavoidable.

Let me also emphasize that the boundaries of acceptability, for any person, are fuzzy. There’s no way that any of us can precisely lay out exactly what ideas we’re willing to support, even without any good theory or evidence, and what ideas just tick us off. Pinker’s ok with the positivity ratio but not with blank slate; conversely, I’m fine with people studying and thinking about all sorts of far-out ideas, but racism ticks me off. That doesn’t make me “right” and Pinker “wrong”; each of us is just willing to be open-minded about different things.

I don’t think complete open-mindedness is possible. Indeed, I’m pretty sure it’s impossible, for reasons analogous to Russell’s paradox. For example, it’s hard to simultaneously be open-minded about an attack on critics of bad statistics in science, while being open-minded about the proposition that some areas of science have become saturated by bad work, in part because of a clubby unwillingness to accept criticism of bad work.

Again, I can accept that Pinker and the other defenders of open-mindedness have a legitimate position, even if I don’t agree with it. Maybe the nitpickers such as myself really are doing net harm (see also here), and maybe we’d be better off “sticking to sports,” as it were. It’s possible.

My point here is only that open-mindedness is relative to what we’ve decided to be open-minded about, and who we’ve decided to let into the room. Is everything published in Psychological Science or PNAS considered to be above any harsh criticism? What about lower-tier journals? Arxiv papers? Blog posts? Work that hasn’t been endorsed by an Ivy League professor? Similarly when considering what police tactics to be open-minded about. These are not easy questions.


  1. Dale Lehman says:

    This was a strange post – it sounded like you were defending being gullible but ended up at the opposite end. I agree with your conclusions. To put it somewhat differently, we don’t need to be gullible in order to be open-minded. We can be skeptical, and at the same time, listen to alternative views and reconsider our positions. I’m not saying it is easy, but I can’t imagine the alternative.

  2. Andrew,

    I get it! I get it! He he I just tweeted something that resonates with your blog commentary.

  3. jim says:

    I agree with the general gist but I think:

    open-mindedness does not equal gullibility,
    nor is skepticism precluded by open-mindedness

    open-minded:”receptive to arguments or ideas”
    gullible:”easily duped or cheated”

    “Open-Minded” is to be willing to read a new paper about something surprising, unexpected, or unintuitive and understand the nature of the evidence and/or arguments before judging it.

    “Gullibility” is to uncritically accept the arguments.

    One can reject arguments and theories that:
    have been made and failed a million times (flat earth, ESP);
    are obviously ridiculous (refilling soup bowl, critical positivity ratio);
    made by the same old people who have long sacrificed their credibility (Trump);
    and still be open-minded.

    • Terry says:

      open-mindedness does not equal gullibility,
      nor is skepticism precluded by open-mindedness

      Came here to say this.

      This is an example of “The Slide,” where you draw the reader in claiming to talk about A, then slide to actually talking about B.

  4. Random Stranger says:

    Brainstorming has been shown, over and over again, not to work. That spoils the argument somewhat.

  5. LemmusLemmus says:

    Philosopher Russell Hardin made a related argument: it’s better to start out too trusting than too distrustful. That way you get a better chance to calibrate your level of trust toward the optimal level.

    Of course, that requires that you get valid feedback on whether your trust was warranted.

    In the end, there’s a cost-benefit calculation, like always. If you trust Ted Bundy, you’re going to learn something, but you’re not going to have a lot of time to enjoy it.

  6. Michael Nelson says:

    I think your post doesn’t take into account a key phrase in the passage: “As a girl….” I don’t know the context for the quote (haven’t read the story), but it seems to me that the narrator may be saying that in whatever culture/period the story is set, for whatever age she describes as “a girl,” a girl’s ignorance is often preserved by social norms in the name of shielding her from corruption. She can’t learn the way most boys do, by asking direct questions and getting direct answers. In fact, a lot of what boys learn about sex or fighting or other potential taboos, they are taught by fathers or brothers without having to ask, because the man believes these are things a boy must learn to become a man.

    But “as a girl,” she has to find back doors to knowledge, this one being the realization that her enthusiastic and non-judgmental belief encourages people–especially of the opposite sex–to share the kind of information from which she has been shielded. They may be half-truths, but in a world where the only way you can learn anything about sex is to listen to a boy complain that he could die from “blue balls,” at least now she knows considerably more about male genitalia than her parents or teachers will tell her. And hopefully, with this malformed foundation, she now will be able to ask more pointed questions of trustworthy adults, who in turn may be more willing to share “dangerous truths” in order to dispel more dangerous falsehoods.

    All of which is to say: in a world where most of us have access to direct forms of inquiry, back doors to knowledge are generally unnecessary. Pinker seems to feel that it is anti-intellectual for society to shield us from ideas that do not meet a moral or technical threshold, and I agree–I don’t need to be protected from ideas–but we have better ways to explore those ideas than listening to one-sided advocacy for torture or investing millions in lame research ideas. We can, instead, analyze both pros and cons of torture, and we can provide limited funds for exploratory research and only publish the results with correct analyses and appropriately conservative claims.

    • Terry says:

      At first, this struck me too. I thought the quote was saying that she was gullible because she was a girl. Then I realized it was just her way of saying “when I was younger”.

      But, your post shows it could be interesting to take it more literally. You give a standard feminist girls-as-victims spin to it, but can it be read differently? Specifically, girls are much more socially subtle and savvy and so are able to feign gullibility and agreeableness to gain power in social interactions. To put it another way, girls tend to be much more knowledgeable and multi-layered about social issues, and girls’ education (via their female kin often) is much deeper on these subjects. This has the benefit of being consistent with a division-of-labor viewpoint that girls specialize more in human/social skills while boys specialize more in more non-human and non-social skills.

      • Michael Nelson says:

        I did consider the idea that the narrator was saying that “as a girl” she had an advantage of being able to glean more than a boy would by using this particular tactic, which I think is consistent with your “different read.” By analogy, you might say that Andrew “as a blogger” has a tool for critical inquiry outside of the journal and conference system, one that can be much more effective than being published in a journal’s letters column. The influence he gains through his “savvy” use of social media, however, does not seem to be worth the cost of his views being confined to his blog and a few other venues: He has reported many cases when his or others’ methodological criticism has been refused a voice by journal editors, as well as cases of exaggerated and personal insults made against advocates for scientific rigor, often from very influential scientists.

        I also wouldn’t call methodological critics victims, or the defense of their academic and scientific freedoms feminist. All roles, even roles of power, are at times both empowering and limiting. It may well be that Andrew has greater influence, and has become a better advocate, because of the limitations imposed on him by those in more powerful roles. But surely a person in a role with less clout in the community deserves the right to choose, on its own merits and not as a social work-around, their own strategy for science communication.

        My point is about gate-keeping: scientists *are* specialists in accumulating knowledge, so there’s no need for Pinker or the like to tell us that we should be less aggressive in our criticism or more tolerant of certain one-sided arguments. Critics of social science reform are oddly prescriptive in their calls for unrestricted discourse. Just as some people are awfully limiting in their depiction of women’s externally-defined roles as actually granting them greater freedom.

  7. Terry says:

    The multi-armed-bandit problem seems relevant here.

    In probability theory, the multi-armed bandit problem (sometimes called the K- or N-armed bandit problem) is a problem in which a fixed limited set of resources must be allocated between competing (alternative) choices in a way that maximizes their expected gain, when each choice’s properties are only partially known at the time of allocation, and may become better understood as time passes or by allocating resources to the choice. This is a classic reinforcement learning problem that exemplifies the exploration–exploitation tradeoff dilemma. The name comes from imagining a gambler at a row of slot machines (sometimes known as “one-armed bandits”), who has to decide which machines to play, how many times to play each machine and in which order to play them, and whether to continue with the current machine or try a different machine.
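    The exploration–exploitation tradeoff the quote describes can be sketched with an epsilon-greedy strategy, one simple bandit heuristic (the arm payoffs and epsilon value below are invented for illustration):

    ```python
    import random

    def epsilon_greedy(true_means, epsilon=0.1, n_pulls=10_000, seed=0):
        """Play a row of slot machines with an epsilon-greedy strategy:
        explore a random arm with probability epsilon, otherwise exploit
        the arm with the best observed average payoff so far."""
        rng = random.Random(seed)
        n_arms = len(true_means)
        counts = [0] * n_arms    # pulls per arm
        totals = [0.0] * n_arms  # summed rewards per arm

        def pull(arm):
            reward = rng.gauss(true_means[arm], 1.0)  # noisy payoff
            counts[arm] += 1
            totals[arm] += reward

        for arm in range(n_arms):  # try every arm once to seed the estimates
            pull(arm)
        for _ in range(n_pulls - n_arms):
            if rng.random() < epsilon:
                arm = rng.randrange(n_arms)  # explore: pick any arm
            else:
                arm = max(range(n_arms),
                          key=lambda a: totals[a] / counts[a])  # exploit
            pull(arm)
        return counts

    # Being "gullible" early (trying every arm) lets the gambler discover
    # which machine actually pays best and then concentrate play on it.
    pulls = epsilon_greedy([0.2, 0.5, 1.0])
    ```

    The connection to the thread: a purely “skeptical” gambler who never explores can get stuck on a mediocre machine, while a purely “gullible” one never cashes in on what the exploration taught them.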

  8. D Kane says:

    > Similarly for vaccine denialism or Holocaust denialism or flat-earth theories or magic magnets or subliminal smiley faces

    Thanks for not including climate skeptics in this list. Much appreciated!

  9. Ethan Bolker says:

    Interesting on this topic, perhaps not relevant for this particular posting.

    Peter Elbow: The Doubting Game and the Believing Game

    • Peter Dorman says:

      Thanks for this reference. I had heard the expression “the believing game” before and wondered where it came from. I suspect most readers of this blog would have little difficulty in tearing Peter Elbow’s argument apart; it has lots of unsupported claims. For me, its most significant failing is conflating the experiential (or positional) basis of belief with truth value. It is valuable to try to see things from someone else’s point of view to understand why they believe some things and not others, but this has almost nothing to do with the validity of those beliefs. Elbow’s error here is widespread among the “every positionality has its own truth” crowd. Of course, this essay is adapted from stuff he wrote long before the positionality doctrine became widespread; he was truly a thought leader. Leading in the wrong direction though.

      • Ethan Bolker says:


        Fair comments. Elbow wrote this when the way to teach students to write was changing from a prescriptive model to one more open to allowing students to develop a writing voice: criticism should help the writer say what s/he means, which requires understanding what’s meant when the words are murky. That requires believing that the writer really has something to say.

        The essay was not meant to suggest the believing mode as a possible route to discovering whether what’s written is true. That’s what this thread is about, which is why I headed my comment “perhaps not relevant for this particular posting”.

    • jim says:

      My thought on this was that we don’t need to be believers or skeptics. Ideally we approach a problem from neutrality and devise an impartial test to compare the competing solutions and verify which solution is most successful at solving the problem.

      Also most “skeptics” aren’t skeptical of everything. They apply skepticism selectively, based on experience, to certain claims, types of claims, modes of investigation, individual investigators or what have you, that have proved worthy of skepticism in the past. So contrasting “believers” vs. “skeptics” is kind of a strawman because most people are both. Believing and skepticism aren’t ways of thinking, they are instead ways of applying experience and knowledge.

    • Terry says:

      Thanks for the link. Very interesting.

      I think he has it backwards though. Gullibility is the default for most people and skepticism needs to be learned.

      Maybe the paper is applicable to people with a scientific bent. But those people are rare.

  10. Martha (Smith) says:

    “What struck me about the above quote was the idea that you can learn more from being gullible than from being guarded.

    Or, to put it another way, that you can learn more from being open-minded than from being skeptical.”

    Speaking as an old woman who was once a gullible girl:

    I don’t think gullible is the same as open-minded, nor do I think that being guarded is the same as being skeptical:

    I see gullible as being more trusting than open-minded.

    I also see “guarded” as more like this definition from the web, “cautious; careful; prudent: to be guarded in one’s speech. protected, watched, or restrained, as by a guard,” than as skeptical.

    • Terry says:

      “I don’t think gullible is the same as open-minded, nor do I think that being guarded is the same as being skeptical:”

      Stand in line sister! Jim beat us both to this point. See Jim’s comment above.

  11. zbicyclist says:

    I’ve found it useful to flip back and forth on ideas — from being “too open” to being “too skeptical”, and points in between. I think you can learn more that way.

    • Martha (Smith) says:

      I often follow the pattern,
      “A, but on the other hand B, and on the third hand, C”

      • jim says:

        Maybe you are a “superforecaster” ! :)

        Have you read Phil Tetlock’s book “Superforecasting: The Art and Science of Prediction”? Apparently “superforecasters” – people who have a much better than average forecasting record in a measured forecasting environment – use *many* hands when balancing evidence.

        • Martha (Smith) says:

          I doubt that I’m a “superforecaster”. I tend not to forecast much — except possibly predicting what a person I know well will do in a particular situation. (Or perhaps things like predicting that at least one student in a large class will do X, despite my repeated admonitions against doing X.)

  12. digithead says:

    “I believe virtually everything I read, and I think that is what makes me more of a selective human than someone who doesn’t believe anything.” – David St. Hubbins

  13. Kien says:

    Hi, I suggest we need to both: (i) keep an open mind (“continuous learning”), while also (ii) make up our own minds (“independent thinking”). We need to cultivate both!

  14. paul alper says:

    The following is by Robert Todd Carroll:

    “Tooth Fairy science” is an expression coined by Harriet Hall, M.D., (aka the SkepDoc) to refer to doing research on a phenomenon before establishing that the phenomenon exists. Tooth Fairy science is part of a larger domain that might be called Fairy Tale science: research that aims to confirm a farfetched story believed by millions of scientifically innocent minds. Fairy Tale science uses research data to explain things that haven’t been proven to have actually happened. Fairy Tale scientists mistakenly think that if they have collected data that is consistent with their hypothesis, then they have collected data that confirms their hypothesis. Tooth Fairy science seeks explanations for things before establishing that those things actually exist. For example:

    You could measure how much money the Tooth Fairy leaves under the pillow, whether she leaves more cash for the first or last tooth, whether the payoff is greater if you leave the tooth in a plastic baggie versus wrapped in Kleenex. You can get all kinds of good data that is reproducible and statistically significant. Yes, you have learned something. But you haven’t learned what you think you’ve learned, because you haven’t bothered to establish whether the Tooth Fairy really exists.*

    Furthermore, there may be a simpler, more plausible explanation for your data. (Most readers will not find it arduous to devise an explanation for those gifts that have replaced teeth that were placed under a pillow.)

    From the WSJ Sunday of February 26, 2012:
    The average gift from the Tooth Fairy dropped to $2.10 last year, down 42 cents from $2.52 in 2010, according to no less an authority than the Original Tooth Fairy Poll, which is sponsored by Delta Dental Plans Association.
    “But the good news,” their PR folks hasten to add, “is she’s still visiting nearly 90% of homes throughout the United States.”
    Some other data points:
    The most common amount left under the pillow by the Tooth Fairy is $1.
    Most children find more money under the pillow for their first lost baby tooth.

  15. Jordan Anaya says:

    I hadn’t seen that zero degrees Celsius quote. Does she mean zero degrees Kelvin? The only thing special about zero Celsius is it’s water’s freezing point, but I don’t know why that would be more important than water’s boiling point (100 degrees Celsius).

  16. Renzo Alves says:

    William James put it like this: If your goal is to believe as many true statements as possible, maximize by believing everything. If your goal is to believe as few false statements as possible, minimize by doubting everything. The options are not limited to these, of course.

  17. Thomas says:

    This sounds like the classic sensitivity/specificity issue in diagnostic testing. It’s a trade-off; there isn’t a single rule to select the best threshold for “positivity.” One useful component of context is whether or not you have additional means of confirmation. If you do, you would set the threshold low and maximize sensitivity at the cost of low specificity (e.g., mammography screening for breast cancer, or D-dimer tests for pulmonary embolism). If it’s the final test, the false positives (and associated costs) will be of greater concern. Maybe one shouldn’t reject wacky ideas too early, only after some “further testing.”
    (Rereading this I’m thinking “duh,” sorry…)
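    The trade-off can be made concrete with a toy score-threshold example (the scores and labels here are invented; any real test would supply its own):

    ```python
    def confusion_at_threshold(scores, labels, threshold):
        """Call score >= threshold "positive", then compute sensitivity
        (fraction of true cases caught) and specificity (fraction of
        non-cases correctly passed)."""
        tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
        fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
        fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
        tn = sum(s < threshold and y == 0 for s, y in zip(scores, labels))
        return tp / (tp + fn), tn / (tn + fp)

    # Toy diagnostic scores (higher = more suspicious) and true status.
    scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
    labels = [0,   0,   0,    1,   0,   1,   1,   1]

    # A low threshold catches every case but admits more false alarms;
    # a high threshold does the reverse.
    sens_low, spec_low = confusion_at_threshold(scores, labels, 0.3)    # (1.0, 0.5)
    sens_high, spec_high = confusion_at_threshold(scores, labels, 0.65)  # (0.75, 1.0)
    ```

    Mapping back to the thread: the “screening” threshold is the open-minded first pass over wacky ideas, and the “final test” is the skeptical follow-up that pays the cost of the false positives.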

  18. Bill Spight says:

    I do not see open-mindedness and skepticism as opposites. To me, being open-minded requires a certain skepticism about one’s own beliefs.

  19. Mark says:

    My initial reading of the quote was along the lines of learning more from mistakes than from successes and gullibility leading to mistakes.

    But on the broader point, I think the question of whether it’s better to be open-minded or skeptical toward new ideas is just poorly formed. Obviously, you want to strike the right balance. On the other hand, it’s entirely reasonable to argue that most people would benefit from being more open-minded about most things. That may or may not be a winning argument, but it pretty easily addresses Andrew’s “What about astrology?” objection.

  20. Sean Matthews says:

    You should keep an open mind, but not one so open your brain falls out.

  21. Christopher Blanchard says:

    Well. There is a special kind of gullibility which is vital for statisticians and the like, when you work with strange tribes, like psychologists, with strange beliefs and entrenched social systems, only part evidential, for validating them. This isn’t a new problem, but the discipline which has done most to confront this kind of difficulty is anthropology, so it might offer useful insights.

    This is all a bit serious for a response to a light hearted thread, but a lot of your discussions about scientific fallibility are in older posts, so if I went there no-one would read my bits. My excuse is that I only recently picked up this blog, but it has been interesting enough that I have read back (big parts) over several years. Anyway, I hope it fits.

    Please see if this resonates.

    I ought to be a bit careful. There is an awkwardness about this because it might seem as if I am somehow denigrating statistical methods. I am not; my point is about choice of research methods. Anthropology can be, and sometimes is, very data intensive, especially when informed by ecology, some techniques in linguistics, or archaeology, and the problems you and your commentators describe definitely apply to those fields (see, for example, the blog at Dynamic Ecology). Even so.

    The inspiration for this is the career and attitudes of Franz Boas. He gets called overblown things like “The father of American Anthropology”, but there is an important point in there. Boas trained as a physicist (doctorate in 1881 from Kiel, on the Colour of Water), and turned to geography, and then anthropology, at a time when anthropology was dominated by theories. His Privatdozent qualification, from Berlin, was in geography, and he became head of a newly created department of anthropology at Clark University, Massachusetts, and then on. The theories he had to deal with included hereditary race differences, with demonstrable physical indicators to prove them, and theories which explained cultural difference as demonstrating a progressive evolutionary process leading upwards to a north European and North American summit. Boas’ responses included detailed technical work on skull collections and the like, to demonstrate the falsity of some of the theories and, more importantly, in ways which developed through his life, to argue that the business of anthropology was to look and listen, so as to learn as much as possible about what really happens in the cultures being studied. He was, above all, an empiricist, but of a special kind, in that he believed (if I don’t parody his ideas) that his discipline wasn’t sufficiently developed, didn’t know enough and hadn’t developed adequate methods of observation, so that theories, especially grand ones, just couldn’t be supported. He was, if you like, a special kind of gullible: wide open to experience, and refusing the mental restrictions of his peers.

    That approach has been tremendously influential (I ought to say that I am not an anthropologist, so better informed people ought to tell me when I am wrong). That is not to say anthropologists don’t have theories (Boas did, despite his fundamental hostility) – everybody does, even if they are subliminal and unacknowledged – but the discipline is intensely self aware, with a profound emphasis on the skills involved in looking at and listening to other people. This gets parodied as “naive empiricism” but it isn’t. There is a modern(ish) approach to archaeology, called “Post Processual”, which seems to me to have imported or re-invented a lot of Boas’ approach in that discipline as well, and there are probably others I don’t know about.

    That isn’t to say there aren’t horrible difficulties with it. Some of the most contentious originate in anthropology but show up in sociological studies of science, and I am sure you have come across them. The difficulty is that the anthropologist or sociologist, and, I would guess, the consulting statistician, has to start a working relationship by accepting (in complicated ways) all the weird things they hear, whether they are about which god made the world, how the blacks are taking our jobs or how hearing about goats will prime you into eating more cheese. ‘Accepting’, is tricky, and isn’t the same as gullibility, but you know that, and you know that they can look to be much the same. To make it work needs subtlety and a great deal of care, and when people take a superficial attitude we end up with the idiocies which promoted the ‘science wars’ of a few years ago.

    The point here is two layered, in that first, psychology and economics, as disciplines, haven’t, mostly, learnt Boas’ lessons, so that the propositions which go into experiments, and then into statistical analyses are too ill formed, too ‘theory-laden’, to be fully credible. And second, that means outsiders looking at these disciplines have to treat their beliefs and rituals with an anthropological slant – ‘accept’, in that strange way, but don’t validate them without putting in the outsider’s work. Much as you might investigate a group of people who have seen a miracle with statues moving in response to prayer – you have to start from a kind of constrained respect for these people, but you don’t join in.

    One way of putting this would be to say that the characteristic noise in the data sets you look at is not just a product of complicated events, but is generated by noise in the heads of the people writing the papers, so you have to, somehow, take that into account.

    Many of the things you and your commentators have written about (over years now), including chronic and horrible misunderstandings about causality, make sense to me as the product of disciplines which don’t train for the kinds of self knowledge we see in the best anthropologists. I don’t mean it doesn’t happen, of course not: I see it in some kinds of psychotherapy, with some psychiatrists and clinical psychologists, with Jesuit priests, with a minority of social workers, and in other places, even though I don’t believe most of their theories. What all these people have, in practice, when they are good, is fine grained, cautious and open minded empiricism, which is that special kind of gullibility, and, ultimately, their theories are secondary to what they really do.

    Appropriate use of statistical methods goes with the grain of this approach. To clarify with an invented example. If you want to model the relationship between the numbers of salmon in some northern river and unemployment pay in the area you can probably get at the causality which justifies your model, but if you want to model salmon numbers and ‘happiness’ in the area, I doubt you can do better than the kind of detailed description we owe to Franz Boas. Inventing happiness indexes just might be useful, but once you start that way you risk losing track of reality and are headed straight towards specious inferences.

    I hope this isn’t too confused. If there is any point to it I suppose the thought might extend to other consulting professionals besides statisticians, and if there isn’t then not.
