Why she teaches her students about scientific failure

Jennifer Lanni writes:

With class about to start, I print 14 Western blot images for my students to discuss. The 3-hour lab is supposed to be the culmination of a weekslong research project in my undergraduate biology course, the day my students determine whether their experimental results support their carefully crafted hypotheses. But the images are all the same—and all full of nothing but background bands. My students are about to have a hard lesson in scientific failure and how to be resilient in the face of it. . . .

Here’s what happened:

On the day of data analysis, I [Lanni] handed them the Western blot printouts and asked them to look over the images and discuss their findings. Most assumed their blots were correct—that the background bands they saw represented the proteins they had hoped to detect—and jumped immediately to interpreting the data. But I refused to let the students move on.

After a solid hour of struggle and some leading questions on my part, one student finally spoke up. “It doesn’t make sense. The bands look the same size, but the proteins should be different sizes.” Hallelujah! A student had stepped back from seeing what they expected to see and described what the data actually showed. Their breakthrough helped their classmates start to look at the results with more objective eyes. Within minutes, they were overflowing with questions and ideas about what could have gone wrong. We spent the next 2 hours covering the chalkboard with plans to troubleshoot the experimental procedures. My students were thinking like scientists—a development no amount of advance planning could have created.

That last bit doesn’t sound quite right, given that the entire activity was created by advance planning! But I get the point.

Lanni continues:

Afterward, I reflected on how we train future scientists. Should we talk more openly with students about failure? When I quietly left research, frustrated at what felt like my lack of accomplishment, was this a typical experience? How often do we inadvertently discourage students from persisting in science, simply by omitting honest descriptions of the failure inherent to the research process? Research is messy and full of failed attempts. Trying to protect students from that reality does them a disservice.

This reminds me of our discussion of how the examples you see in the textbooks are not representative of the sorts of problems you see in the real world.

23 thoughts on “Why she teaches her students about scientific failure”

  1. I suspect an important variable here is timescale. It sounds like there’s no easy way to redo these biology experiments. In most undergrad labs I’ve been in (not all), you could start over in a day and probably be alright, though they have no shortage of failures (X doesn’t work, Y broke, Z works randomly, etc. etc.).

  2. Not directly related, but I can’t resist the chance to repeat this anecdote.

    Years ago Marguerite Lehr, a colleague of mine at Bryn Mawr College, told me of a conversation she’d had years before that with Oscar Zariski, a brilliant algebraic geometer then at Johns Hopkins. She told him about a failed attempt to solve a particular problem. He said “you must publish this.” She asked why, since it had failed. He replied that it was a natural way to attack the problem and people should know that it wouldn’t work.

  3. This is an excellent essay. I’m not sure about this, though: “How often do we inadvertently discourage students from persisting in science, simply by omitting honest descriptions of the failure inherent to the research process?” I think just as often, our omission of the difficulty of actually doing science encourages *too many* people to pursue advanced degrees. At least in physics, for which most undergraduate education involves lots of “clean” problems, many students go to graduate school expecting it to be more of the same, and the realization that research is fundamentally different is shocking.

    • Raghu:

      The flip side is that undergraduate science is so clean that it can be unappealing. I didn’t go to grad school in math because they never told me about applied math, and pure theory just seemed boring, except for the really cool problems, like how to prove that the binary expansion of sqrt(2) has an equal rate of 1’s and 0’s, which I had no idea how to go about proving. I was lucky to find applied statistics.
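      As an aside, checking the empirical rate is easy; it’s the proof that’s hard (whether sqrt(2) is normal in base 2 is still an open problem). Here’s a minimal Python sketch of the empirical check:

      ```python
      # Count 1's among the first N fractional binary digits of sqrt(2).
      # This only measures the observed rate; proving it converges to 1/2
      # (base-2 normality of sqrt(2)) remains an open problem.
      from math import isqrt

      N = 10_000  # number of fractional binary digits to examine

      # floor(2**N * sqrt(2)) = isqrt(2 << 2*N); in binary this is the
      # leading '1' of sqrt(2) followed by its first N fractional bits.
      bits = bin(isqrt(2 << (2 * N)))[2:]
      frac = bits[1:]  # drop the integer-part bit
      print(f"share of 1's in first {len(frac)} bits: {frac.count('1') / len(frac):.4f}")
      ```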

    • I think this may vary by subject. I came up through the ranks in psychology and quickly learned that research can be difficult or just weird.

      Second year, two weeks into a lab project on operant conditioning, we realized the Skinner box was defective and we had no idea what the real reinforcement schedule was; fourth year, spending hours sitting in a car in the dead of winter at a shopping mall, counting cars; etc.

      A friend (female) doing a research practicum at the local high security prison had a passing guard leap between her and her (male) interviewee and hustle the inmate out.

      And then there was the study that started out with 20 pigeons and ended with 19. The lead investigator said they had no idea what happened to the missing one. Nowadays I’d assume a hungry grad student.

      We started learning a bit about the joys of research early on, though it was not until grad school and after that one hit the nastier and messier problems.

  4. “It doesn’t make sense. The bands look the same size, but the proteins should be different sizes.”

    This is an odd way of describing western blots. The size of the protein determines the location of the band, not the band’s size. Perhaps it is just strange phrasing.

    Is there a picture of what the students were shown?

      • What do you think you are contributing?

        The location of the band is determined by molecular weight.

        The size of the band is determined (very roughly) by how much protein is present.

        “The bands look the same size” maybe refers to them migrating the same distance, but “the size of the band” already has a different meaning. I’d definitely have to double-check that the student understood if they said that.
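        For concreteness, here’s a minimal sketch of how apparent size is usually read off a blot: migration distance is roughly linear in log10(molecular weight), so you fit a standard curve to the marker ladder and interpolate. The ladder values and distances below are invented for illustration.

        ```python
        # Map band position to apparent protein size via a log-linear
        # standard curve fit to the marker ladder (values invented).
        import numpy as np

        ladder_kda = np.array([250, 130, 100, 70, 55, 35, 25])  # marker sizes, kDa
        ladder_mm = np.array([4, 15, 20, 26, 30, 38, 44])       # migration, mm

        slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_kda), 1)

        def apparent_kda(distance_mm):
            """Apparent molecular weight of a band from how far it migrated."""
            return 10 ** (slope * distance_mm + intercept)

        print(f"a band at 33 mm reads as ~{apparent_kda(33):.0f} kDa")
        ```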

        • The provided link shows how western blot analysis is colloquially described. It was meant to help you understand the OP, since you seem to have problems interpreting what people write and can’t abstract information very well.

          In a western blot, the “size” of a band is denoted in kDa. It’s colloquially described (refer to the link I provided) as the size of the protein that has been detected, and it’s the scale on which the proteins are separated in a western blot. Bands at different positions on that scale represent different proteins. Yes, technically the kilodalton is a unit of mass.

          Per the OP, the student observed bands of the same size (kDa), reflecting the same background bands/proteins. The proteins should have been different sizes (migrated to different kDa regions) on the blot. Hence the comment: “It doesn’t make sense. The bands look the same size, but the proteins should be different sizes.”

          Incidentally, I don’t think you are contributing anything to the conversation, because you are completely ignoring the message embedded in the OP (the value of teaching failure, of not shoehorning observations to fit a particular hypothesis, etc.) and instead choosing to nitpick how a professor’s student describes western blot results.

        • Nah, you’ve never actually talked to many people about blots if you think “look the same size” sounds normal.

          “*At* the same size” is something people would say though.

          It would be interesting if the student came to the right answer due to a completely wrong understanding. But it probably is just odd phrasing.

        • Anon:

          Enough! Side discussions can be fun, but this is getting ridiculous; it’s just too bio-technical for our blog and irrelevant to the points being discussed here.

  5. This makes me think of an interesting book I read a year or two ago: Visual Intelligence, by Amy Herman. It has a big focus on being able to describe and summarize what is in front of you while trying not to jump to conclusions about what it means. My only complaint about the book is that (at least in the version I read) the artworks are reproduced at too small a scale. There are some nice exercises in which you’re supposed to look at a work and record what you see about it. It’s very much in line with the OP.

    Here’s an interesting old article about Herman teaching these skills to cops, an application that is also discussed in the book. https://www.smithsonianmag.com/arts-culture/teaching-cops-to-see-138500635/

  6. One of the things that interests me in this story is that failure was a collective experience — all students in the class failed to get results, and the classroom provided a forum where they could see that and discuss it. This could have gone quite differently if the experience of failure had been a private one — I suspect, based on my own research, that many would have inferred that they simply didn’t have the “hands” needed to do bench work and perhaps weren’t cut out to be scientists: https://osf.io/preprints/metaarxiv/h37kw/

    • My original impression was that Lanni engineered the failure for the learning experience. However, after reading her article, that seems wrong. In her article there’s no explanation or discussion of why the experiment failed. Lanni says only that “my class never did generate data to test their hypotheses”.

      She also says: “we finished the semester reading about and discussing scientific failure.”

      Huh? That seems pretty weird!

      If your experiment failed, normally it’s because you did something wrong. When you get bad data, you go through your prep and analysis process and figure out what happened, then you set up ways to isolate the failure. When you find it, you set up a procedure to avoid it. **That** is a scientific learning experience. I’m not sure what “discussing scientific failure” means, so I can’t tell if it’s a “learning experience” but if it doesn’t lead to you figuring out where you messed up, it doesn’t sound useful.

      Second, normally – especially with a group of undergrads – you don’t just go charging off into space testing wild ideas; you stick with published methods, so you can figure out what went wrong if you fail. So if your work was so far off the mark that you drew a blank even though you followed an established analytical procedure to a T, you probably don’t understand what you’re doing at all.

      It’s too bad there’s not more info on this, because the story has some strange wrinkles.

      • Yeah, it’s hard to follow because at least in my read most of the story is about getting the students to recognize that the experiment was a “failure,” and the actual discussion around failure is summarized quite quickly. I think what’s going on in the first part of the story is that students didn’t have strong enough background expectations about what the results should have looked like for them to be able to separate signal from noise. Lanni had to first get the students to a place where they could understand that the signal was missing, and only then could they start troubleshooting their experiments.

        • Nicole –

          I personally often employ a cognitive dissonance model of teaching. Rather than just ‘splainin how things work, you create a state where the students realize deeply that there’s something that they don’t know, and they don’t understand how to find the answer. And then they have to explore how to find the answer. Essentially, it’s about creating a “need to know” what the answer is.

          My main goal as a teacher is to help students better understand their own learning process and develop a set of metacognitive skills and strategies as learners. In that regard, my “cognitive dissonance” approach can sometimes be demotivating – particularly for students who don’t have much confidence in their own learning process or a clear metacognitive strategy for learning. Their identity as learners is fragile, discouragement is easy, and they can reach conclusions such as that they “can’t do microbiology” when they experience confusion and think they’re supposed to know the answers. And so I use the method judiciously, taking care to match it to the particular student when possible.

          Seems to me that roughly speaking, this teacher was using a similar method.

        • Nicole said: “…students didn’t have strong enough background expectations about what the results should have looked like for them to be able to separate signal from noise.”

          I think this is where “discovery”-oriented teaching runs off the ferry dock that it thought was going to be a bridge on its road trip to science learning. Students need to have X amount of background knowledge to develop sensible hypotheses, and the first two to four years of undergrad education are what give them that background. Without it, they’re hunting for whales in Lake Erie with no clue as to why that’s a bad idea.

          I’m confused about what she refers to as “failure” in science. I can’t recall ever getting an inexplicable blank in my data! That part of the story is hard for me to grasp. They got nothing and were unable to develop a plausible explanation for that? It seems weird. My experience is that science takes baby steps from one thing to the next, which makes most failures explainable. Thinking back to papers I’ve read that used techniques later shown not to work as claimed, those techniques were often controversial from the outset, because people could already suggest reasons why their underlying assumptions were wrong.

          What do you think she means by “failure” in science?

  7. I heard about some recent changes to undergraduate physics lab design that centre on making labs more about experimental design than about simply executing a prescribed experiment. For example, part 1 of a lab would be using a pendulum and measuring the frequency as a function of length, but in part 2 students would be expected to come up with their own experiment using the pendulum setup. You can end up with groups that try to measure the local gravitational acceleration or something and fail, but (presumably) they learn a lot about experiments. (A minimal version of that estimate is sketched below.)

    I won’t pretend to know what the “best” way to teach something is, but repeating a bunch of straightforward “known good” experiments doesn’t make someone a good scientist.
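    For what it’s worth, here is the sketch promised above: a minimal version of the part-2 estimate, with made-up measurements, using the small-angle relation T = 2π·sqrt(L/g), which makes T² linear in L:

    ```python
    # Estimate g from (length, period) pendulum data (measurements made up).
    # Small-angle model: T = 2*pi*sqrt(L/g)  =>  T**2 = (4*pi**2 / g) * L,
    # a line through the origin, so fit the slope by least squares.
    import numpy as np

    lengths = np.array([0.25, 0.50, 0.75, 1.00])  # pendulum lengths, meters
    periods = np.array([1.00, 1.43, 1.74, 2.01])  # measured periods, seconds

    slope = np.sum(lengths * periods**2) / np.sum(lengths**2)  # zero-intercept fit
    g_hat = 4 * np.pi**2 / slope
    print(f"estimated g = {g_hat:.2f} m/s^2")  # ~9.8 if the data are clean
    ```

    The failure mode mentioned above is instructive too: with large swing angles or sloppy timing, the small-angle model no longer holds and the estimated g can land far from 9.8, which is exactly the kind of discrepancy that forces students to revisit their experimental design.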

  8. I am surprised no one has mentioned here the fairly well-known quotes about failure from Thomas Edison:

    “Every wrong attempt discarded is another step forward”

    “I have not failed. I’ve just found 10,000 ways that won’t work.”

    My experience is that unless you are very lucky or extremely gifted, the above quotes are a pretty accurate reflection of much of the process of research. My limited experience these days with younger researchers is that they don’t seem to have learned to approach things with some skepticism; in particular, they assume that if it is published somewhere it must be correct, even more so if the methods have long, fancy names.
