30 thoughts on “My review of Duncan Watts’s book, ‘Everything is Obvious (once you know the answer)’”

  1. In your review, you said, “Don Rubin used to tell us that there’s no such thing as a “paradox”: once you fully understand a phenomenon, it should not seem paradoxical any more.”

    That reminds me of how irritating I used to find “paradoxes” — I didn’t see what it was about them that made people think they were somehow strange or hard to accept. I was delighted when I found out that “paradox” literally means “counter to orthodoxy”.

    • Also, “obvious” reminds me of reading of advice that mathematician Wilhelm Magnus gave about reading mathematical papers: Look first at places where the author says “obvious”, since those are most likely to be places where there is a mistake.

      • Martha:

        This is related to the advice that I once heard for writing, to omit all uses of the following words and phrases:
        very
        quite
        note that
        interestingly
        obviously
        of course.

        There are a few more I’m forgetting right now, and there are of course exceptions where such phrases are appropriate. But when I search in papers written by myself and my collaborators, these phrases can almost always be omitted, and omitting them adds clarity. A word like “quite” or “very” is not quantitative; rather, it can be a substitute for quantitative thinking. Similarly, “of course” leaves open the all-important question of why it is “of course,” and “obvious” is a substitute for explanation. Finally, “Note that” and “Interestingly” raise the question of why all the other sentences of your paper are, apparently, neither interesting nor noteworthy.

  2. > Trust me [Andrew] on statistics, don’t trust me on physics.

    Because of your intuition, or a clear sense of your areas of ignorance, or knowing when obvious is not obvious, or your willingness to seek out and engage with severe criticism?

    My guess would be the latter (or should I say hope, as even I can do that).

    Of course, note that obviously some parts of my question are very interestingly quite literally based on Andrew’s comments in past links ;-)

  3. “Everything is Obvious has blurbs from [various] and . . . Alan Alda. Alan Alda?? I thought it was impressive when we got Nassim Taleb to blurb Red State Blue State. But Alan Alda is on the next level.”

    I can’t tell whether the above was meant as sarcasm, but if not — I think you got the more impressive blurb. Alda is just an actor; why should I care about his opinions? Taleb, on the other hand, is someone worth paying attention to.

      • Regarding Alan Alda’s connection to Duncan Watts: here is a passage from Sync by Steven Strogatz.
        “On a quiet afternoon in the spring of 1994, I was sitting in my office at MIT, immersed in a calculation, when a ringing phone dragged me back from the depths. “This is Jean calling from Alan Alda’s office. Will you hold for a call from Mr. Alda?”
        A few seconds later I heard that familiar voice. “Hello, this is Alan Alda. I don’t know if you know me, I’m an actor.”
        “Yes?” I was dumbfounded.
        “I just read your Scientific American article about synchronization, and I’d like to come talk to you about it.””

        Watts is a student of Strogatz.

  4. In conditional defense of the “that’s obvious” reaction:

    If a study concludes that “we used to think X, but we now know Y,” whereas in fact we have long been aware of a combination of X and Y, then I expect the study to shed light on the combination. If it does not, I find its conclusions obvious and not informative.

    A possible example (I say “possible” because I can’t judge from an op-ed) is Martin P. Seligman and John Tierney’s op-ed, “We Aren’t Built to Live in the Moment.” It discusses Seligman’s (and many others’) research on the role of “prospective thinking” in human life. According to Seligman and colleagues, we think *primarily* in terms of the future; this explains even depression: “Therapists are exploring new ways to treat depression now that they see it as primarily not because of past traumas and present stresses but because of skewed visions of what lies ahead.”

    The findings are interesting enough–but the piece strikes me as hyperbolic. First of all, we have not ignored prospective thinking until now. Second, while prospective thinking may indeed deserve more attention and research, it does not explain quite as much as the authors claim:

    “But it is increasingly clear that the mind is mainly drawn to the future, not driven by the past. Behavior, memory and perception can’t be understood without appreciating the central role of prospection. We learn not by storing static records but by continually retouching memories and imagining future possibilities. Our brain sees the world not by processing every pixel in a scene but by focusing on the unexpected.”

    Here the authors set up false oppositions: between “storing static records” and “continually retouching memories and imagining future possibilities” (two quite different things) and between “processing every pixel in a scene” and “focusing on the unexpected.” This very way of framing the issue seems to ignore the possibility of a continual interaction between perception, retrospection, and prediction.

    So, back to the point: when told that humans are future-thinkers, I respond, “that’s obvious.” Here I mean that I want to know more. Of course prospective thinking plays a large role in our lives, but how does it combine with other kinds of thinking? How does one kind of thinking sometimes assume the guise of another? What problems might arise from viewing prospective thinking as *the* key to human behavior?

    • “Therapists are exploring new ways to treat depression now that they see it as primarily not because of past traumas and present stresses but because of skewed visions of what lies ahead.”

      DS: The findings are interesting enough…

      GS: Actually, it is just more mentalistic nonsense that doesn’t explain anything. Even if some aspect of behavior (here the issue is “depression”) is due to what a person thinks, nothing is gained by stating this, for we would have to explain why the person has the thoughts he or she does, and to do that we will eventually have to turn to the relevant past environments (that relevant to natural selection, that relevant to cultural selection, and that responsible for much of the ontogeny of behavior) and current environment. A natural science of behavior can and should formulate the relation between the relevant environments and behavior and not interpose often made-up nonsense to give the illusion of contiguous causation. “Sure, past history is important, but it shapes the mind or brain and that is the real cause and real focus of a natural science.” That kind of thinking is precisely why mainstream psychology – and the fields it has corrupted (like much of neuroscience) – is such a pile of crap.

      • Thank you for the comment. Also, the article refers to numerous studies that have little to do with each other. For example:

        “The central role of prospection has emerged in recent studies of both conscious and unconscious mental processes, like one in Chicago that pinged nearly 500 adults during the day to record their immediate thoughts and moods.” [This study appears in a special issue of Review of General Psychology guest-edited by Baumeister and Vohs and dedicated to the science of prospection.]

        “Studies have shown depressed people are distinguished from the norm by their tendency to imagine fewer positive scenarios while overestimating future risks.”

        “It turned out that even the behaviorists’ rats, far from being creatures of habit, paid special attention to unexpected novelties because that was how they learned to avoid punishment and win rewards.”

        “Perhaps the most remarkable evidence comes from recent brain imaging research. … Researchers have found that the same circuitry is activated when people imagine a novel scene [as when they recall a past event]. Once again, the hippocampus combines three kinds of records (what, when and where), but this time it scrambles the information to create something new.”

        There are still more–but even these references are all over the place.

    • “But it is increasingly clear that the mind is mainly drawn to the future, not driven by the past.”

      This sounds dubious — in particular, talking about “the mind” rather than individual people’s minds: why should different people’s minds all be the same way, whether “mainly drawn to the future, not driven by the past,” or shaped by other combinations of focus on past, present, and future? My default assumption is that different people have different “time-frame” orientations — that is, spend different proportions of time on past, present, and future. And perhaps some people can focus on only one of these at a time, whereas others routinely focus on some combination of two at a time, and others routinely focus on all three?

  5. A few threads back I wrote a comment suggesting that people use common sense to make decisions when there is no good research. A reader replied that common sense is often wrong and that our senses frequently give us bad information. He recommended Duncan’s book to explain to me just how dumb I am because of my senses. Sure.

    I don’t buy the argument that our senses have us all messed up and if we’d just pay more attention to the results of X research we’d all be better off. Frequently – whatever that means – we’d be worse off. Research does develop some real answers to real problems. But research results are overturned on a daily basis. It happens in all branches of science, so it’s not just social science, although I’d say social science is extremely vulnerable to implicit, unrecognized assumptions that, once recognized, can destroy entire branches of research.

    There’s a reason our senses really do work most of the time: 4-friggin’-billion years of evolution. That’s a lot more than a century of social science – literally *gazillions* of experiments have been performed to refine our senses. They’re not perfect, for sure, because the world is a competitive place. But they’re pretty good.

      “A few threads back I wrote a comment suggesting that people use common sense to make decisions when there is no good research. A reader replied that common sense is often wrong and that our senses frequently give us bad information.”

      “Common sense” is a phrase that seems artificial (or at least subject to different interpretations) to me. In particular, different people may have different notions of “common sense”.

      “There’s a reason our senses really do work most of the time … evolution”
      This suggests what I consider an overly simplistic view of evolution. I think of evolution as “survival of the fit enough to have survived so far”, and it refers to groups, not individuals. So some members of the group may have “senses” that work well, others may have senses that work poorly, but as long as the group survives, then — well, the group survives. Individuals may not survive, or they may survive simply because people with better “senses” have had a strong enough influence to carry along some of those with poorer senses.

  6. I’m not sure what you think is shown by what you have pointed to – particularly the bit supposedly relevant to “behaviorism.” Oh…BTW, “habit” was not a term used by Skinnerians, and it is really only Skinnerian behaviorists that remain.

    Anyway, the “rat data” (if we can call the casual comment that) perfectly makes my point…certain rats aren’t different because they happen to pay attention to novel stimuli, they are different AND they pay attention to novel stimuli BECAUSE of the contingencies (aspects of the environment that constrain behavior-consequence relations) pointed to by the phrase “the behaviorists’ rats…paid special attention to unexpected novelties because that was how they learned to avoid punishment and win rewards.” [The rat’s “paying attention,” BTW, is itself behavior.] So…the rats “paid attention” to novel stimuli AND they did whatever it was that was supposed to show they were “paying attention” to novel stimuli (what was actually measured…probably pressing a lever or something) BECAUSE of the CONTINGENCIES.

    Like I said, I can’t tell exactly what you are driving at, but you seem to be arguing for the notion that, roughly speaking, “people are depressed because of the thoughts that they have.” But how do you show that having certain thoughts, and the other “symptoms of depression,” are not both functions of the same variables – variables of the sort that I am referring to?

    • I assume you are replying to my comment. I am not supporting the article or its arguments at all; I meant to convey skepticism instead. By “all over the place” I meant disparate, unconnected, inconclusive in themselves and together, etc.

      • My initial comment was about your endorsement of Seligman’s notion that (apparently the symptoms of) depression was, well, caused by the thoughts a person has. My second comment was concerning the things you pointed to (the “rat data” among them). I didn’t, and still don’t, understand what you are driving at, or how it relates to your endorsement of Seligman’s notion.

        • I was initially responding to Duncan’s annoyance with people who, upon reading of the latest social science research, respond, “that’s obvious.” My point is that such a reaction is legitimate when said research fails to contribute insights (regarding the research questions, the methodology, or both). I brought up Seligman’s op-ed as an example of summarized research that (at least within these confines) *fails* to illuminate. I was trying to be cautious and tactful, since I realize that an op-ed has limits. Maybe my effort at tact caused the confusion.

          I didn’t endorse Seligman’s notion. I said, “The findings are interesting enough–but the piece strikes me as hyperbolic.” I thought the second part of the sentence indicated skepticism. I certainly didn’t mean “interesting” as any kind of substantive endorsement. Perhaps instead of saying “The findings are interesting enough,” I should have said, “The findings, if tenable, might be interesting.”

          My point is that it’s no surprise–i.e., it’s “obvious”–that we think a lot about the future and that our thoughts about the future shape many aspects of our lives. It isn’t a new discovery. So it isn’t enough to hear this. I would want to know more about the relationships between past, present, and future in human thought.

          In particular, I am not impressed by the revelation that depression involves frequent thoughts about the future. I would want to know how such thoughts relate to thoughts about the present and past–and to a person’s health, circumstances, etc. Also, as you point out, there are different aspects to the present and past, each of which needs consideration.

          Instead of substance, I found some hyperbolic claims–e.g., that we think *mainly* in terms of the future–along with some scattered and unconnected references to research. My second comment was meant to illustrate just how disparate those references were. I did not mean “all over the place” as a compliment.

        • “it’s no surprise–i.e., it’s “obvious””

          I don’t see “it’s no surprise” as the same as “it’s obvious.” For example, one can have a prior belief that “either A or B or some combination of them might cause C,” so a result purporting to show that “A causes C” (or one showing that “B causes C”, or one showing that “a certain combination of A and B causes C”) would be “no surprise,” but would not be “obvious”.

        • Martha:

          There’s also a related mathematical point about the difficulty of talking about surprises, which is that something has to happen. I think this came up a few months ago on the blog. I don’t remember the context, but the basic idea is that suppose there are four possible outcomes: A, B, C, and D. And the probabilities of each happening are 20%, 22%, 28%, and 30%, respectively. Then we wait and see what happens. Whatever happens won’t be a “surprise” (as it had at least a 20% chance of happening a priori) but it would not be “obvious” either (as its chance was no more than 30% a priori). Start increasing the number of options and this sort of thing can be happening all the time.
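          A minimal sketch of this point, using the four hypothetical outcomes and probabilities from the comment above (the labels A–D and the percentages are just the illustrative numbers given there):

```python
import random

# Four hypothetical outcomes, as in the comment above: each is likely
# enough not to be a "surprise", yet unlikely enough not to be "obvious".
outcomes = {"A": 0.20, "B": 0.22, "C": 0.28, "D": 0.30}

# The probabilities form a complete distribution over what can happen.
assert abs(sum(outcomes.values()) - 1.0) < 1e-9

# Whatever happens had a prior probability between 20% and 30%.
lo, hi = min(outcomes.values()), max(outcomes.values())
print(f"every outcome had prior probability in [{lo:.0%}, {hi:.0%}]")

# "We wait and see what happens": draw one outcome at random.
observed = random.choices(list(outcomes), weights=list(outcomes.values()))[0]
print(f"observed {observed} (prior probability {outcomes[observed]:.0%})")
```

          Whichever outcome the draw produces, its prior probability sits in that middling band, so by the commenter's criteria it is neither a surprise nor obvious; with more options, such middling outcomes become the rule rather than the exception.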

        • DS: My point is that it’s no surprise–i.e., it’s “obvious”–that we think a lot about the future and that our thoughts about the future shape many aspects of our lives. It isn’t a new discovery.

          GS: This is precisely what I was criticizing – the notion that “thinking explains behavior.” Even where some behavior (in this case thought, the “penultimate behavior”) must occur before the ultimate “caused” response can occur, the thinking is a function of the sorts of variables that are obscured by saying that “thought causes behavior” – the thought, and the behavior to which it leads, are BOTH functions of the variables that must be understood by a natural science of behavior, and which are ignored by mainstream psychology. So…my main point is not that Seligman’s notion is incomplete but worthy of pursuit, but rather that it reflects everything that is wrong with mainstream psychology. Thoughts explain nothing until you explain the thoughts, and when you do that, you will have your hands on the real independent variables.

        • Ah, I see your point: where I saw Seligman’s notion as incomplete (but possibly promising), you see it as flawed to the core. Thinking and behavior, you point out, are both functions of the same underlying environmental variables; mainstream psychology obscures this complex reality by asserting that thinking causes behavior.
