In research as in negotiation: Be willing to walk away, don’t paint yourself into a corner, leave no hostages to fortune

There’s a saying in negotiation that the most powerful asset is the ability to walk away from the deal.

Similarly, in science (or engineering, business decision making, etc.), you have to be willing to give up your favorite ideas. When I look at various embarrassing examples in science during the past decade, a common thread is researchers and their supporters being unwilling to throw in the towel.

Don’t get me wrong: failure is not the goal of a research program. Of course you want your idea to succeed, to flourish, and only to be replaced eventually with some improved version of itself. Similarly, you don’t go into a negotiation intending to walk away. But if you do research without the willingness to walk away if necessary, you’re playing science with one hand behind your back.

As with negotiation, your efforts to keep your theory alive, to defend and improve it, will be stronger if you are ultimately willing to walk away from your theory if necessary. If you know ahead of time that you’ll never give up, then you’ve painted yourself into a corner.

Look what happened with Marc Hauser, or Brian Wansink. They staked their reputations on their theories and their data. That was a mistake. When their data didn’t support their theories, they had to scramble, and it wasn’t pretty.

Look at what happened with Daryl Bem. He collected some data and fooled himself into thinking it represented strong evidence for his theory. When that didn’t work out—when careful examination revealed problems with his methods—he doubled down. Too bad. He could’ve preserved his theory to some extent by just recognizing that his experimental methods were too noisy to measure the ESP he was looking for. If he really wanted to make progress on the science of ESP, he’d have to think more about measurement. Bem’s defense of his scientific theory was too brittle. The front-line defense—defending not just the concept of ESP but the particular methods he used to study it and his particular methods of data analysis—left him with no ability to go forward. We can learn from our mistakes, but only if we are willing to learn.

Look at what happened with Satoshi Kanazawa. He found some data and fooled himself into thinking it represented strong evidence for his theory. When that didn’t work out—when careful examination revealed problems with his methods—he doubled down. Too bad. His general theory of evolutionary psychology has to have some truth to it, but he won’t be able to learn anything about it with such noisy methods, any more than I could learn anything useful about special relativity by studying the trajectories of billiard balls. If Kanazawa were willing to walk away from his claims, he could have a chance of doing some useful research here. But instead he’s painted himself into a corner.

To continue with the analogy: Walking away should be an option, but it’s not the only option. A negotiator who walks away too soon can miss out on opportunity. And, similarly, as a scientist you should not abandon your pet theory too quickly. It’s your pet theory—nobody else is as motivated as you are to nurture it, so try your best. So, please, defend your theory—but, at the same time, be willing to walk away. It is only by recognizing that you might be wrong that you can make progress. Otherwise you’re tying yourself into knots, trying to give a deep explanation of every random number that crosses your path.

Tomorrow’s post: Stan saves Australians $20 billion

16 Comments

  1. Andre says:

    At what point have I gone too far with an idea? If it’s a piece of software, it’s not some theory that I’ve tricked myself into believing will work; it’s equipment that can benefit others’ research. It’s something that can be completed, and that people can use. Collecting more data won’t make the theory less reliable, because it’s a product.

    Regarding software, where’s the sweet spot, where I throw in the towel? I’ve got a lot of open projects that I don’t want to abandon. It makes me look bad for employment if I’m not able to finish many of the projects I’ve started. If it’s a learning experience for me, and it benefits others, and a good display of technical skills, then where’s the negative of eventually finishing an engineering project, especially if I enjoy it? Early in one’s career, it makes one look incompetent if there’s not a common theme to one’s work.

    Let’s be honest – I’m not producing any research, I’ve only completed a few software projects. So I’m focused on building skills as an engineer. So when do I quit and move on to something else?

    I’m not seeing how the scientific anecdotes connect to engineering.

    • Andrew says:

      Andre:

      I’ve worked on various software projects, and my general advice on this one is to involve other people in the decision making. If there are other people who think it’s a good idea for a particular project to be pursued to completion, then (a) this can be an indication that the project might be worth doing, (b) this can provide you with social support to keep working on the project, and (c) maybe these other people will be interested in helping.

      The analogy between scientific research and software projects isn’t bad, I think. With software, even once the project is completed, you don’t know that anyone’s gonna use it, and eventually it will be superseded anyway, hence two of the key goals of a software project are: (1) to get immediate value from it (which motivates “dogfood”-style projects that you can directly use yourself), and (2) to have impact in later projects (your product has some special aspects of functionality or design or implementation that others can copy and use in their projects). Similarly, just about every scientific research project has a limited lifespan, and so you want it to be useful in the short term while inspiring later developments (from yourself and others) in the medium and long term.

    • gec says:

      > it’s equipment that can benefit others’ research

      This is *precisely* what a scientific theory is meant to be. Admittedly, this is not communicated well in most science education, but these are some of the things that a theory is meant to do:
      1) Provide approximate descriptions of the causal processes operating in a particular domain.
      2) Enable communication by providing a set of shared terms and concepts.
      3) Make clear the link between a given situation and predictions of what will happen in that situation.
      4) Act as an “intuition pump” to generate new hypotheses and research questions.
      All of these things are functions that a theory is meant to provide to the scientific community at large, just like a piece of equipment or software. Indeed, the link is even tighter in fields that express theories in computational/mathematical language. Physical theories are often realized in FORTRAN code (which fewer and fewer people know how to read!) and I myself am a member of a vibrant community within cognitive science that expresses theories as computer code.

      And the reasons for abandoning a theory are actually quite analogous to why one would abandon a software project. Obviously, if it makes enough wrong predictions (is not just buggy, but logically broken) a theory should be abandoned. If areas where the theory is expected to generalize instead require new machinery be bolted on that only comes into play in specific scenarios, that is evidence that the theory/software needs to be redesigned or abandoned. If terms (functions) become so overloaded that they no longer provide a meaningful link between assumptions and predictions (the API becomes inscrutable) this is good reason to abandon the theory. If people are asking questions (trying to achieve functionality) that the theory doesn’t address, you’d better look for new theory. And if a more compact/elegant expression of the theory is found (a new software implementation that is more efficient without sacrificing generality) this is reason to chuck the old one.

      Beyond the striking resemblance between theory development and software development, I want to emphasize two things: First, that the description I’ve provided of the functions of theory and reasons for “walking away” are hardly exhaustive. Second, many of the reasons for abandoning a theory are not necessarily about how “true” the theory is, but about properties that make it “useful” as a tool for others.

      • I think a lot of people underestimate the importance of programming a theory into a formal algorithm for theory development. It’s EASY to fool yourself into thinking you have a theory, when in fact you only have pieces of a theory, or not much at all even.

        If a theory has been programmed into a formal language, it is a definite thing. Until then, it’s just some ideas.
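        To make this concrete, here is a minimal sketch (in Python; the model and its decay-rate parameter are invented purely for illustration, not any established theory) of how a vague verbal claim like “memory fades over time” becomes a definite thing once programmed:

```python
import math

# Hypothetical illustration: a verbal "theory" of forgetting
# ("memory fades over time") becomes definite only when formalized.
# Here it is pinned down as exponential decay with an explicit
# decay-rate parameter -- an assumption made for this sketch.

def recall_probability(hours_elapsed: float, decay_rate: float = 0.1) -> float:
    """Predicted probability of recalling an item after a delay."""
    return math.exp(-decay_rate * hours_elapsed)

# Once coded, the theory makes concrete, checkable predictions:
predictions = {t: round(recall_probability(t), 3) for t in (0, 1, 24)}
print(predictions)
```

        Until the decay rate and functional form were written down, “memory fades” was compatible with almost any data; as code, it commits to specific numbers that an experiment can contradict.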

  2. Norbert says:

    Another virtue of a scientist is to get the facts right. Oddly, Hauser addles some brains so his faults are made to fit into a narrative that is what the narrator expects rather than what actually happened. Hauser’s theories have not (yet) been shown to be faulty. Indeed his experiments, the ones he was harshly punished for, have replicated, so far as I know and I have followed this closely. If you have recent evidence showing that this is incorrect, perhaps you would like to share this with us.

    • Andrew says:

      Norbert:

      One thing that Marc Hauser, Daryl Bem, Brian Wansink, and your neighborhood astrologer all have in common: Their theories have not been shown to be faulty.

      There’s a value in science for theory without experiment, indeed there’s value in theory that inspires experiments by others.

      The problem with Hauser’s papers was not that he presented theories. The problem was that he “fabricated data, manipulated experimental results, and published falsified findings” (in the words of the Department of Health and Human Services, as quoted by Wikipedia). If he’d just presented his theories as theories without claiming that his experiments provided evidence for them, he’d’ve been fine. Or if he’d presented his theories as theories, along with the evidence that did not support the theories, that would’ve been fine too.

      When you say Hauser “addles some brains,” it might be more accurate to say that Hauser “fabricated data, manipulated experimental results, and published falsified findings.” Doesn’t quite have the same ring to it, but it’s more relevant to the point of how he ended up getting fired.

      But, sure, the fact that a fraudster was involved in a scientific hypothesis does not make that hypothesis faulty. That’s clear. If, tomorrow, it were revealed that some important physicist working on relativity theory had faked his data, we should not take that to imply that relativity theory is useless. Science is science; it lives beyond any individual.

      • Andrew says:

        P.S. Just to emphasize: I have no problem with Hauser’s claims replicating. My problem is with his handling of data, not with his theories. As I wrote of Wansink, it could well be that sloppy or even falsified data analysis could be performed in the service of valuable theories. Maybe Hauser had excellent qualitative understanding and was able to come up with excellent theories, and maybe it was just his statistical naivety that led him to expect every experiment to turn out just as predicted, which in turn motivated cheating. Recall the Lance Armstrong principle.

  3. KennethM says:

    ‘be willing to walk away’

    …but, but we have always been told that dedication, constancy of purpose, and hard work are the keys to success.

    (“Winners never Quit & Quitters never Win”)

    …just how does one know when to quit and walk away?

  4. Phil says:

    Coincidentally, a friend sent me the following just a few minutes before I read this post:
    https://getpocket.com/explore/item/the-data-that-threatened-to-break-physics?utm_source=pocket-newtab

    • Andrew says:

      Phil:

      I clicked on the link, and tried to read the article, but . . . aahhhh, I no longer have patience for this magazine-article style of writing, where the author gradually pulls you into the story, humanizes people by telling us irrelevant things like how they dress and what they sound like, then gradually leads us into a state of ambiguity and suspense, a state of disorientation akin to the start of a Rockford Files episode, and then gradually reveals the secrets, etc etc etc.

      I want to start out by knowing what happened. I want the basic story, then the details. I was scrolling through this damn story trying to figure out what happened, what they’re talking about, and I just gave up.

      • I love the Rockford Files, but I don’t have much patience for the writing either.

        As I remember it, there was basically some clock drift and some excess propagation delay, so there was a bias in the measurement of the time of arrival; the neutrinos were actually traveling at less than the speed of light.
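        As a back-of-the-envelope sketch (not the actual OPERA analysis), here is how a roughly 60 ns timing bias over the article’s 454-mile baseline is enough to make a light-speed-limited signal look superluminal:

```python
# Rough illustration of the timing-bias effect, using the figures
# quoted in the article (~454-mile baseline, ~60 ns early arrival).
# This is a sketch of the arithmetic, not OPERA's own analysis.

C = 299_792_458.0             # speed of light, m/s
BASELINE_M = 454 * 1609.344   # 454 miles in meters (~730 km)

light_time_s = BASELINE_M / C   # true light travel time (~2.44 ms)
bias_s = 60e-9                  # arrival measured 60 ns too early

# A fixed bias subtracts from the measured flight time, inflating
# the inferred speed just past c:
apparent_speed = BASELINE_M / (light_time_s - bias_s)
print(f"apparent v/c = {apparent_speed / C:.7f}")  # slightly above 1
```

        The point of the arithmetic is how small the bias is relative to the flight time: about 60 ns out of roughly 2.4 ms, a part-in-forty-thousand effect, which is why it was so hard to hunt down.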

      • Phil says:

        I just watched a Rockford Files episode a few days ago, for the first time in probably 35 years. I guess it was an episode from late in the series: Rockford and Becker are friends, they hang out. Weird. Still, it really took me back.

        I, too, skimmed through this article until I could tell what it was about. Here, I will cut out all the BS and just give paragraphs that tell the important part. Everything between the ==== lines is a direct excerpt from the article; … means I’ve cut a bunch of material. I will say, though, that by giving up on the article so early you did miss out on something. Most of the BS is at the start.

        ====
        “The guy who is looking at the data calls me,” Ereditato tells me… “He says, ‘I see something strange.’ ” What he saw was evidence that neutrinos traveled through 454 miles of Earth’s crust, from Switzerland to Italy—which they are supposed to do—at such a high speed that they arrived 60.7 nanoseconds faster than light could travel that distance in outer space—which should have been impossible.

        I ask Ereditato if he thought it must have been a mistake. “I don’t think it’s fair to say this,” Ereditato tells me. “If we say that, we bias our analysis. So when we got this indication that something was so astonishing, the first reaction was, well, let’s find why this is so.”

        “I think as any scientist, [I was] very, very, very skeptical from day one,” says Ereditato. “You make a check list: timing, receiver, GPS, transmitter from the receiver to the detector, … you check everything.” Some options were checked immediately, while others required them to wait. The CERN beam, for example, could not be stopped. In the meantime, Ereditato drove his team hard. “You could not imagine how I was handling this business with my colleagues—check this, check that, do this, do that, do this, let’s cut the chain, do it again, do it again—we did this from spring to September 23rd!”

        The team tried and tested every permutation of software, hardware, and theory that they could think of, and through every step, every bug they fixed, every increment of understanding they earned, the evidence for faster than light neutrinos stood as solid as the mountain above the experiment. Then, the inevitable happened: News of the data leaked. People outside the experiment started gossiping about a violation of relativity, a result that would rattle the foundation of physics like it hadn’t been rattled since 1900, when Max Planck discovered quantum physics. The rumors “spread at the speed of light,” Ereditato tells me.

        “And then what do you do? Think about yourself taking the position of spokesperson. Do you say: No, no comment? And then everyone will blame you, all journalists: ‘Oh you hide it. We want to know what is happening. We are taxpayers giving support to you, we have the right to know!’ Or you make a claim.” In a sinister voice, he adds: “I discovered the superluminal neutrinos.”

        OPERA announced its results on September 23rd, 2011 at a special seminar at CERN. The team did not state that it had observed a violation of relativity, and instead of using phrases like “evidence for” or “discovery of,” it called the data an “anomaly.” But that pivotal caveat was lost in the sensation of human interaction. While the conditional made it into The New York Times headline, “Tiny Neutrinos May Have Broken Cosmic Speed Limit,” it did not make an appearance in The Daily Telegraph (“CERN Scientists ‘Break the Speed of Light’ ”) or The Guardian (“Faster Than Light Particles Found, Claim Scientists”) or Scientific American (“Particles Found to Travel Faster Than Speed of Light.”)

        In one direction lay epic, ground-breaking physics—and in the other, potential embarrassment. Should OPERA have waited? How many more months could they have spent analyzing and reanalyzing the result? Leaning forward and pointing at me through the camera, Ereditato explains why a scientist can’t ignore a measurement just because it seems absurd. “You don’t kill this … nature is talking to us, not through theories, but through experimental results. The worst data are better than the best theory. If you look for reasonable results, you would never make a discovery, or at least you will never make an unexpected discovery. You only make—this is a contradiction in terms—an expected discovery.”

        The announcement got OPERA the help they had hoped for. A few days afterward, with the operators of the CNGS beam, they started developing a new approach to the measurement. The original analysis had to use a statistical technique to determine the neutrino’s arrival time because the beam was spread out in space. The new approach was to generate neutrinos in tight bunches so that they would arrive at the detector together, making it much easier to determine their arrival time.

        It took two months to reconfigure the neutrino beam, perform the experiment and analyze the results—unprecedented speed for an experiment of this complexity.

        The faster-than-light measurement was still there.

        The next step would be to seek independent confirmation outside of OPERA itself, which is common practice. The Higgs, for example, was observed by both the ATLAS and CMS experiments. But there were no other experiments that could confirm or deny OPERA for at least several years. There was, however, another experiment at the base of Gran Sasso, called the Large Volume Detector (LVD), that could at least check OPERA’s timing system. The idea was to make sure the clocks of each experiment were synchronized by comparing the arrival times of cosmic ray muons in their respective detectors.

        “This was really the killing experiment,” Ereditato tells me. Looking back through all five years of OPERA data, the teams found a period when OPERA’s timing was off by about 73 nanoseconds. Then another mistake was found with the timing circuit that affected the bunched beam experiment: The frequency of OPERA’s clock wasn’t locked to the timing of the bunches. The combination of the two problems accounted completely for the 60 nanosecond early arrival time of the CNGS muon neutrinos.

        ======

        The article goes on to explain why this mistake was so hard to find, etc.

        A few things about this. First, I ‘knew’ when I heard about these anomalous results that they were wrong, just like every other physicist in the world did. This whole thing didn’t seem like a big deal to me, just a routine experimental error that was eventually corrected. I was wrong about that.

        Second, I can’t fault the OPERA team at all in this. They had this weird result that “couldn’t be right” and they tried really really hard to fix it and they couldn’t find a problem. What are they supposed to do, decide to ignore it? There are examples (such as the Rutherford experiment, and the discovery of the muon) in which theoretically impossible or at least unexpected results turned out to be real.

        Third, as soon as the ‘killing experiment’ was done, it seems everyone immediately accepted the result (which they had never fully believed anyway). They had a timing problem, they didn’t know how and they had tried really really hard to make sure they didn’t, but they did, and they all accepted that immediately. But maybe that’s just because the experiments were so clean. If they had come around to really believing in the initial result, and if the experiment that contradicted it were less definitive somehow — I’m trying to make it analogous to some of these social science or psychology problems — perhaps some of them would have chosen sides and painted themselves into a corner, I dunno.

        • The thing that bothered me the most about this was how the team felt it was embarrassing to say what their data did seem to say. The fact that they couldn’t find a deep 60-nanosecond flaw in a timer is no reason to be embarrassed. Unlike the bad examples we see on this blog, this was a perfect example of humility without a dramatic claim.

  5. Wonks Anonymous says:

    This sounds a bit similar to Eliezer Yudkowsky’s advice to always have a line of retreat:
    https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat
