This new book, written by two psychology researchers, is an excellent counterpart to Lying for Money by economist Dan Davies, a book that came out a few years ago but which we happened to have discussed recently here. Both books are about fraud.
Davies gives an economics perspective, asking under what conditions large frauds will succeed, and he focuses on the motivations of the fraudsters: often they can’t get off the fraud treadmill once they’re on it. In contrast, Simons and Chabris focus on the people who get fooled by frauds; the authors explain how it is that otherwise sensible people can fall for pitches that are, in retrospect, ridiculous. The two books are complementary, one focusing on supply and one on demand.
My earlier post was titled “Cheating in science, sports, journalism, business, and art: How do they differ?” Nobody’s Fool had examples from all those fields, and when they told stories that I’d heard before, their telling was clear and reasonable. When a book touches on topics where the reader is an expert, it’s a good thing when it gets it right. I only wish that Simons and Chabris had spent some time discussing the similarities and differences of cheating in these different areas. As it is, they mix in stories from different domains, which makes sense from a psychology perspective of the mark (if you’re fooled, you’re fooled) but gives less of a sense of how the different frauds work.
For the rest of this review I’ll get into some different interesting issues that arose in the book.
Predictability. On p.48, Simons and Chabris write, “we need to ask ourselves a somewhat paradoxical question: ‘Did I predict this?’ If the answer is ‘Yes, this is exactly what I expected,’ that’s a good sign that you need to check more, not less.” I see what they’re saying here: if a claim is too good to be true, maybe it’s literally too good to be true.
On the other hand, think of all the junk science that sells itself on how paradoxical it is. There’s the whole Freakonomics contrarianism thing. The whole point of contrarianism is that you’re selling people on things that were not expected. If a claim is incredible, maybe it’s literally incredible. Unicorns are beautiful, but unicorns don’t exist.
Fixed mindsets. From p.61 and p.88, “editors and reviewers often treat the first published study on a topic as ‘correct’ and ascribe weaker or contradictory results in later studies to methodological flaws or incompetence. . . . Whether an article has been peer-reviewed is often treated as a bright line that divides the preliminary and dubious from the reliable and true.” Yup.
There’s also something else, which the authors bring up in the book: challenging an existing belief can be costly. It creates motivations for people to attack you directly; also, it seems to me that the standards for criticism of published papers are often much higher than for getting the original work accepted for publication in the first place. Remember what happened to the people who squealed on Lance Armstrong? He attacked them. Or that Holocaust denier who sued his critic? The kind of person who is unethical enough to cheat could also be unethical enough to abuse the legal system.
This is a big deal. Yes, it’s easy to get fooled. And it’s even easier to get fooled when there are social and legal structures that can make it difficult for frauds to be publicly revealed.
Ask more questions. This is a good piece of advice, a really important point that I’d never thought about until reading this book. Here it is: “When something seems improbable, that should prompt you to investigate by asking more questions [emphasis in the original]. These can be literal questions . . . or they can be asked implicitly.”
Such a good point. Like so many statisticians, I obsess on the data in front of me and don’t spend enough time thinking about gathering new data. Even something as simple as a simulation experiment is new data.
Unfortunately, when it comes to potential scientific misconduct, I don’t usually like asking people direct questions—the interaction is just too socially awkward for me. I will ask open questions, or observe behavior, but that’s not quite the same thing. And asking direct questions would be even more difficult in a setting where I thought that actual fraud was involved. I’m just more comfortable on the outside, working with public information. This is not to disagree with the authors’ advice to ask questions, just a note that doing so can be difficult.
The fine print. On p.120, they write, “Complacent investors sometimes fail to check whether the fine print in an offering matches the much shorter executive summary.” This happens in science too! Remember the supposedly “long-term” study that actually lasted only three days? Or the paper whose abstract concluded, “That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications,” even though the study itself had no data whatsoever on people “becoming more powerful”? Often the title has things that aren’t in the abstract, and the abstract has things that aren’t in the paper. That’s a big deal considering: (a) presumably many many more people read the title than the abstract, and many many more people read the abstract than the paper, (b) often the paper is paywalled so that all you can easily access are the title and abstract.
The dog ate my data. From p.123: “Many of the frauds that we have studied involved a mysterious, untimely, or convenient disappearance of evidence.” Mary Rosh! I’m also reminded of Dan Davies’s famous quote, “Good ideas do not need lots of lies told about them in order to gain acceptance.”
The butterfly effect. I agree with Simons and Chabris to be wary of so-called butterfly effects: “According to the popular science cliché, a butterfly flapping its wings in Brazil can cause a tornado in Texas.” I just want to clarify one thing which we discuss further in our paper on the piranha problem. As John Cook wrote in 2018:
The butterfly effect is the semi-serious claim that a butterfly flapping its wings can cause a tornado half way around the world. It’s a poetic way of saying that some systems show sensitive dependence on initial conditions, that the slightest change now can make an enormous difference later. . . . The lesson that many people draw from their first exposure to complex systems is that there are high leverage points, if only you can find them and manipulate them. They want to insert a butterfly at just the right time and place to bring about a desired outcome.
But, Cook explains, that idea is wrong. Actually:
Instead, we should humbly evaluate to what extent it is possible to steer complex systems at all. . . . The most effective intervention may not come from tweaking the inputs but from changing the structure of the system.
The point is that, to the extent the butterfly effect is a real thing, small interventions can very occasionally have large and unpredictable results. This is pretty much the opposite of junk social science of the “priming” or “nudge” variety—for example, the claim that flashing a subliminal smiley face on a computer screen will induce large changes in attitudes toward immigration—which posit reliable and consistent effects from such treatments. That is: if you really take the butterfly idea seriously, you should disbelieve studies that purport to demonstrate those sorts of bank-shot claims about the world.
Clarke’s Law
One more thing.
In his book, Davies talks about fraud in business. There’s not a completely sharp line dividing fraud from generally acceptable sharp business practices; still, business cheating seems like a clear enough topic that it can make sense to write a book about “Lying for Money,” as Davies puts it.
As discussed above, Simons and Chabris talk about people being fooled by fraud in business but also in science, art, and other domains. In science in particular, it seems to me that being fooled by fraud is a minor issue compared to the much larger problem of people being fooled by bad science. Recall Clarke’s law: Any sufficiently crappy research is indistinguishable from fraud.
Here’s the point: Simons and Chabris focus on the people being fooled rather than the people running the con. That’s good. It’s my general impression that conmen are kind of boring as people. Their distinguishing feature is a lack of scruple. Kind of like when we talk about findings that are big if true. And once you’re focusing on people being fooled, there’s no reason to restrict yourself to fraud. You can be just as well fooled by research that is not fraudulent, just incompetent. Indeed, it can be easier to be fooled by junk science that isn’t fraudulent, because various checks for fraud won’t find the problem. That’s why I wrote that the real problem of that nudge meta-analysis is not that it includes 12 papers by noted fraudsters; it’s the GIGO of it all. You know that saying, The easiest person to fool is yourself?
In summary, “How do we get fooled and how can we avoid getting fooled in the future?” is a worthy topic for a book, and Simons and Chabris did an excellent job. The next step is to recognize that “getting fooled” does not require a conman on the other side. To put it another way, not every mark corresponds to a con. In science, we should be worried about being fooled by honest but bad work, as well as looking out for envelope pushers, shady operators, and out-and-out cheats.
While we’re at it, I suspect that you’d enjoy reading “The Enigma of Reason,” by Mercier and Sperber. You may have already seen it mentioned when Henry Farrell at Crooked Timber reviewed an earlier post of yours: https://crookedtimber.org/2020/07/24/in-praise-of-negativity/
Two takeaways from the book that are relevant here: first, that reason evolved not to discover the truth, but rather to convince others of our ideas; and second, that we’re terrible judges of the weaknesses of our own arguments, but great judges of the weaknesses of others’, so one of the best ways to strengthen our own arguments is to engage in open, honest, good-faith debate with others.
“to the extent the butterfly effect is a real thing…”
Has the “butterfly effect” ever been shown to exist? Ever? The famous example – butterfly wing flapping produces tornado – is Freakonomics BS: A surprising and unexpected conclusion that makes for a great story, but surprising and unexpected because it’s certainly impossible and frankly ludicrous. It’s extremely doubtful that small effects ever self-amplify under any conditions except when by chance they coincide with many other small effects – which then negates the causal relationship between any one effect and the ultimate outcome.
I understand ‘the butterfly effect’, in the very, very abstract, to be the observation that in a symplectic manifold, under very common circumstances, two arbitrarily close points will diverge arbitrarily under the associated group action (! – see Vladimir Arnold).
In other words, arbitrarily small differences in initial conditions result in arbitrarily large divergences in final conditions. The flap of a butterfly wing, e.g., represents the arbitrarily small difference for the global weather system. It does not mean that if you intervene at the right point, then arbitrarily small interventions will have arbitrarily large predictable effects. Quite the opposite.
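For reference, here is the standard textbook way to state that property for a general dynamical system on a metric space (my paraphrase of the usual definition, not a quote from anyone in this thread):

```latex
% Sensitive dependence on initial conditions for a flow \varphi_t on (X, d):
\exists\,\delta > 0 \;\; \forall x \in X \;\; \forall \varepsilon > 0 \;\;
\exists\, y \in X,\; t > 0 : \quad
d(x, y) < \varepsilon
\quad\text{and}\quad
d\big(\varphi_t(x), \varphi_t(y)\big) > \delta .
```

Note that the statement only guarantees that some nearby starting point eventually drifts past a fixed threshold; it promises nothing about being able to choose where the perturbed trajectory ends up, which is exactly the “quite the opposite” point above.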
Yes. It cannot be literally demonstrated in the atmosphere because we can’t run a parallel experiment with and without a butterfly. But it has been demonstrated mathematically and numerically all the way from Lorenz’s three-variable set of coupled differential equations representing tropical convection to state-of-the-art weather simulation and prediction models, at least for perturbations of arbitrarily small amplitude that are spatially broad enough to be represented in the model.
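To make “demonstrated mathematically and numerically” concrete at toy scale, here is a minimal sketch (assuming Python with numpy and scipy available; the parameter values are just the classic ones from Lorenz’s 1963 paper) that integrates the three-variable model twice from starting points differing by one part in 10^8:

```python
# Minimal sketch: sensitive dependence in Lorenz's three-variable convection
# model. Two runs start 1e-8 apart and end up on different parts of the
# attractor. Assumes numpy and scipy are installed.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) equations with the classic chaotic parameter values."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 40.0, 4001)
x0 = np.array([1.0, 1.0, 1.0])
x0_nudged = x0 + np.array([1e-8, 0.0, 0.0])  # the "butterfly flap"

sol_a = solve_ivp(lorenz, (0, 40), x0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(lorenz, (0, 40), x0_nudged, t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Distance between the two trajectories at a few times: it grows roughly
# exponentially and then levels off once the runs are simply on different
# parts of the attractor (which is only a few tens of units across).
sep = np.linalg.norm(sol_a.y - sol_b.y, axis=0)
for t_check in (0, 5, 10, 20, 30, 40):
    i = min(np.searchsorted(t_eval, t_check), len(t_eval) - 1)
    print(f"t = {t_check:4.1f}   separation = {sep[i]:.3e}")
```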
Another example of small effects self-amplifying: The other day, I left a bottle of soda in the freezer for a few hours, then took it out and had a sip. It turned out that the soda was supercooled, and my sip introduced one or more tiny particles that could form nucleation sites for ice (or else the disturbance from my sip allowed a tiny bit of ice to nucleate spontaneously). I watched as the entire bottle froze in about twenty seconds, wondering what would happen if the liquid already in my gut were to do the same.
See Cat’s Cradle.
The term “complex system” implies some level of stability, otherwise we have trouble considering it a “system.”
The butterfly effect is actually a statement about system stability and should only be considered in that context. Sensitive dependence on initial conditions implies instability.
This cat video is a lot better metaphor for the butterfly effect than any aspect of the climate, which exhibits all the characteristics of a highly damped system:
https://youtu.be/6ldtN_EEtOE
“my sip introduced one or more tiny particles that could form nucleation sites for ice”
That’s not a “self amplifying” effect AFAIK, nor is it chaotic. Your supercooled liquid was metastable. You added a catalyst that drove the reaction / change of state – a predictable outcome.
“at least for perturbations of arbitrarily small amplitude that are spatially broad enough to be represented in the model. ”
What’s the minimum scale for that? 100m? 1km? even 100m is many orders of magnitude larger than the amplitude of the wavelength of the wind from a butterfly’s wings. Maybe a Permian dragonfly could have done it…
John N-G:
Another problem I have with the supposed “butterfly effect”: there are millions of butterflies – what happens when they’re all flapping their wings at once? Which butterfly is responsible for “the” effect?
Aside from the scale issues, what kind of natural effect could be analogous to your “single perturbation” in an atmospheric model? The problem is the real world is full of billions and billions and billions of effects of similar orders of magnitude.
Chipmunk, the scale representable in the model could be several km, but that’s only because we don’t have ridiculously large supercomputers. The fact is that the equations used in weather modeling have the property that nearby solutions diverge according to some Lyapunov exponent. A butterfly flapping its wings certainly changes the weather 1 month from now. But not in some reliable predictable way. Still, if you had two identical worlds and one had a butterfly flap and one didn’t, 1 month later the weather would be different everywhere on the globe. Not necessarily crazily so. The fact is it has a bounded set of possibilities, so it can’t get arbitrarily far apart. A butterfly flapping its wings will not result 1 week from now in the boiling away of the oceans, but it will result in things like a different pattern of waves breaking on the Pacific coast, a different overnight temperature in Texas, a different pattern of rainfall in India, different swirls to the turbulence at 30000 feet etc.
additional info for chipmunk:
The butterfly effect is about the *detailed* state of a system, it is not about *every statistic* of that system.
If you could follow the position and momentum of every atom in the earth and its atmosphere, do you deny that with vs without the butterfly flap some of those positions would diverge drastically from where they otherwise would have been? Perhaps one molecule of water vapor would swirl around and be trapped by a tree and become part of a carbohydrate complex and be turned into wood in the no-butterfly world, while after the theoretical butterfly flap instead it would rise high up into the atmosphere, form part of a cloud and rain out in a different state where it would be ingested by a turtle and become part of one of its eggs.
That’s really what the butterfly effect is. The statement about “causing a hurricane in China” or whatever is not wrong per se but mischaracterizes the kind of thing that the effect is really about.
Would you consider something like Path Dependence in the formation of cities (as used in Urban Economics) to fall under this category? It seems like it’s similar, but the overall system isn’t quite as sensitive.
The butterfly doesn’t cause the tornado. It alters the trajectory of the climate system (through whatever n-dimensional space) by a miniscule amount.
There is nothing bizarre or surprising about this. Go to a large field and walk straight ahead for 30 minutes. Then have a friend start from the same spot but turn to the left a few degrees. See how far apart you are in the end.
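To put rough numbers on the field analogy (my own back-of-the-envelope, assuming a 5 km/h pace, so about 2.5 km walked in 30 minutes, and a 3-degree difference in heading):

```latex
\text{separation} \;=\; 2 d \sin(\theta/2)
\;\approx\; 2 \times 2500\,\mathrm{m} \times \sin(1.5^\circ)
\;\approx\; 130\,\mathrm{m}.
```

One caveat on the analogy: here the gap grows only in proportion to the distance walked, whereas in a chaotic system the gap between nearby trajectories grows exponentially in time until it saturates.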
Have you ever tried to find out?
https://en.wikipedia.org/wiki/Double_pendulum
Somebody:
Great recommendation. That’s a really fun article!
Somebody:
Can you distinguish the important features of the concept? The important feature of the butterfly effect is that it self-magnifies by many orders of magnitude, not that it is chaotic or sensitive to initial conditions.
That’s incorrect. Whatever you think you mean by “self-magnifies by many orders of magnitude”, and I’m pretty sure you don’t know what you mean, the butterfly effect is typically understood to be about chaos and sensitivity to initial conditions.
For example, that’s how it’s described in the post above, on Wikipedia, in Merriam-Webster, and in the first link on Google after Wikipedia.
“Can you distinguish the important features of the concept?”
I’m 100% with “somebody” here.
The aspect of “self magnifying” is that the *difference* in the trajectory of the system grows exponentially *for some period of time*. That’s the Lyapunov exponent. It’s not that it grows exponentially forever; since the total energy of the system is more or less bounded, that’s impossible. The point isn’t “the butterfly flapping its wings is like a supernova in slow motion”; it’s that the stuff that happens becomes more and more different, growing exponentially in time while the differences are still small. Eventually that exponential growth in the magnitude of the differences stops, because to get arbitrarily far apart forever, molecules would have to go speeding out of the solar system at light speed.
Look at the trajectories in the example here:
https://en.wikipedia.org/wiki/Double_pendulum#/media/File:Demonstrating_Chaos_with_a_Double_Pendulum.gif
The tips of the pendulums all start very close together in space; eventually, by somewhere in the range of 7 to 9 seconds, they are about as far apart from each other as it’s possible for them to be within the constraints of the system.
That’s what is meant by butterfly effect by anyone who actually studies dynamical systems. There may be some wrong view of what it is in popular discourse though.
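If you’d rather see numbers than the animation, here is a rough sketch (assuming Python with numpy and scipy; equal masses and rod lengths, and release angles I picked for illustration, not the setup in the Wikipedia gif) of two double pendulums released from rest with the upper angle differing by a thousandth of a degree:

```python
# A rough sketch, not the Wikipedia animation's setup: two identical double
# pendulums (equal masses, equal rod lengths) released from rest with the
# upper angle differing by a thousandth of a degree. Requires numpy and scipy.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81   # m/s^2
m = 1.0    # kg, both bobs
L = 1.0    # m, both rods

def double_pendulum(t, s):
    """Standard equations of motion; s = [theta1, omega1, theta2, omega2]."""
    th1, w1, th2, w2 = s
    d = th1 - th2
    den = L * (2 * m + m - m * np.cos(2 * d))
    dw1 = (-g * (2 * m + m) * np.sin(th1)
           - m * g * np.sin(th1 - 2 * th2)
           - 2 * np.sin(d) * m * (w2**2 * L + w1**2 * L * np.cos(d))) / den
    dw2 = (2 * np.sin(d) * (w1**2 * L * (m + m)
                            + g * (m + m) * np.cos(th1)
                            + w2**2 * L * m * np.cos(d))) / den
    return [w1, dw1, w2, dw2]

t = np.linspace(0.0, 20.0, 4001)
release = [np.radians(120.0), 0.0, np.radians(-10.0), 0.0]
nudged = [np.radians(120.0 + 1e-3), 0.0, np.radians(-10.0), 0.0]
a = solve_ivp(double_pendulum, (0, 20), release, t_eval=t, rtol=1e-10, atol=1e-10)
b = solve_ivp(double_pendulum, (0, 20), nudged, t_eval=t, rtol=1e-10, atol=1e-10)

# The lower-arm angles start indistinguishable and end up unrelated; exactly
# how fast that happens depends on the release angles, but the growth is
# exponential while the gap is still small.
for t_check in (2, 5, 10, 15, 20):
    i = min(np.searchsorted(t, t_check), len(t) - 1)
    print(f"t = {t_check:4.1f} s   |delta theta2| = {abs(a.y[2, i] - b.y[2, i]):.2e} rad")
```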
The important thing to remember is that a butterfly flapping its wings can also prevent a tornado, in the strictly scientific sense of counterfactual causality. People seem to draw some kind of connection between the action of the wings turning into wind like a runaway nuclear reaction, and assume some kind of proximal causation. The butterfly only causes the tornado in the same sense as the following:
https://youtube.com/watch?v=dakx97gRCx0&feature=sharea
Somebody: perfect analogy. The flap of the butterfly changes the direction of certain molecules, that changes the fluttering of certain leaves, which causes a cricket to jump off a leaf, a bird sees the cricket and dives to grab it, which causes a cat to pounce at it, which scares up a flock of crows, which all move air around differently which alters the evaporation rate of water, which cools the tree leaves which opens their stomata which increases the humidity of the surrounding area, which alters the rate of thermal updraft which causes a cloud to form earlier, which shields a certain range of ground from the sun which … eventually causes very different weather in India.
I have no problem with the concept of the “butterfly effect” as described by Daniel and somebody. Indeed, amplification is not essential.
Nevertheless, pretty much every claim made about the damn butterfly is wrong. Let’s start with the double pendulum. What would happen if I tried it here on earth? I would get a very brief period of chaotic behavior, in which the three pendulums deviate by increasing amounts, as shown on Wikipedia. But soon the amplitude would decay and they would go through a phase in which the divergence was declining instead of increasing, until finally they would come to rest at the bottom, all in the same place just like they began. The butterfly operates in a damped system just like the pendulums.
Weather is indeed best modeled as a chaotic system, but also as a dissipative one. It is created by the atmosphere shearing across the land surface, plus vast amounts of air piling up against mountain ranges, pressure gradient winds, convection and conduction off the ocean, and other macro-scale phenomena. It has nothing to do with butterfly wings; that’s just a bad metaphor. Dissipative chaotic systems are very difficult to embody in simple metaphors!
Matt, the atmosphere may be dissipative in terms of its kinetic energy, but it’s also continuously forced by energy input from the sun. The total kinetic energy in the atmosphere is, to zeroth order, approximately constant plus seasonal effects, and to first order probably trending upwards due to “climate change.” A damped oscillator that is continuously perturbed by a random input from a stepper motor would likely be a better example. I think you’ll find that even if the random stepper motors are precisely synchronized (connected to the same random number generator), the damped pendulums will go “forever” and be very different in their trajectories for almost all time (they’ll briefly come close to the same trajectory “randomly,” but the “butterfly effect” will again cause them to diverge).
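Here is a minimal sketch of that idea (assuming Python with numpy and scipy), swapping in a deterministic sinusoidal drive for the stepper motor and using damping and forcing values commonly used in textbook demonstrations of chaos in the driven, damped pendulum; two copies started a hair apart keep diverging rather than settling back together:

```python
# A sketch, not a claim about the real atmosphere: a damped pendulum that is
# continuously forced (here by a sinusoidal drive standing in for the stepper
# motor) can stay chaotic indefinitely, so the damping never erases a tiny
# initial difference. Requires numpy and scipy.
import numpy as np
from scipy.integrate import solve_ivp

def driven_pendulum(t, s, damping=0.5, amp=1.2, omega_d=2.0 / 3.0):
    """theta'' = -damping*theta' - sin(theta) + amp*cos(omega_d*t), nondimensional."""
    theta, omega = s
    return [omega, -damping * omega - np.sin(theta) + amp * np.cos(omega_d * t)]

t = np.linspace(0.0, 200.0, 20001)
a = solve_ivp(driven_pendulum, (0, 200), [0.2, 0.0], t_eval=t, rtol=1e-9, atol=1e-9)
b = solve_ivp(driven_pendulum, (0, 200), [0.2 + 1e-6, 0.0], t_eval=t, rtol=1e-9, atol=1e-9)

gap = np.abs(a.y[0] - b.y[0])
# The gap grows instead of damping away; without the drive, both copies would
# just spiral down to rest at theta = 0 and the gap would vanish.
for t_check in (10, 50, 100, 200):
    i = min(np.searchsorted(t, t_check), len(t) - 1)
    print(f"t = {t_check:5.1f}   |delta theta| = {gap[i]:.2e}")
```

Drop the drive term and both copies spiral down to rest, which is Matt’s damped-pendulum picture; the continuous forcing is what keeps the divergence alive.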
“This is pretty much the opposite of junk social science of the “priming” or “nudge” variety—for example, the claim that flashing a subliminal smiley face on a computer screen will induce large changes in attitudes toward immigration—which posit reliable and consistent effects from such treatments.”
Reliable and consistent effects, some of which only manifest under highly specific conditions. Here I’m referring to the attempts by nudgers to explain away replication failures as due to things like minor variations in the experimental setup. This is a huge red flag that something is off. Call it the Uri Geller Principle. Geller claimed to have some general telekinetic ability, but for some reason this ability only ever worked when the task was bending spoons (and maybe a few other carefully selected objects).