Confusing reliability with validity

This note by Steve Hsu on the history of the Wranglers (winners of a mathematics competition held each year from 1753-1909 at Cambridge University) reminded me of my experience in the U.S. math olympiad training program in high school. At the time, it seemed clear that we were ordered by ability (with my position somewhere between 15th and 20th out of 24!). In retrospect, I think there are a lot of tricks to solving and writing up solutions to “Olympiad problems,” and I didn’t know a lot of these tricks.

It was the usual paradox of measurement: I was confusing reliability with validity, as they say in the psychometric literature.
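For readers who want the psychometric distinction made concrete, here is a minimal simulated sketch (the setup and numbers are invented for illustration, not taken from the post): a test can rank people very consistently across administrations (high reliability) while measuring something other than the intended trait (low validity). Here "trick knowledge" stands in for whatever the olympiad actually rewarded, assumed independent of underlying ability.

```python
import numpy as np

# Toy model: the test measures "trick knowledge," not the ability we care
# about, and the two are assumed independent.
rng = np.random.default_rng(0)
n = 10_000
ability = rng.normal(size=n)   # the trait we intend to measure
tricks = rng.normal(size=n)    # what the test actually measures

# Two administrations of the same test: mostly tricks, plus a little noise.
score1 = tricks + 0.3 * rng.normal(size=n)
score2 = tricks + 0.3 * rng.normal(size=n)

reliability = np.corrcoef(score1, score2)[0, 1]  # test-retest consistency
validity = np.corrcoef(score1, ability)[0, 1]    # agreement with the target trait

print(f"reliability ~ {reliability:.2f}")  # high: the ordering is stable
print(f"validity    ~ {validity:.2f}")     # near zero: wrong thing measured
```

The point of the sketch is that a stable rank ordering tells you the instrument is consistent, not that it is measuring what you think it is.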

In retrospect, it worked out well for me to learn (even if falsely) that there were 15 or 20 kids my age better than me in math. This made me realize that a career as a “mathematician” (to the extent I understood what this meant, based on my experiences up to the age of 16) was not for me. Given what I know now, I think I would’ve wanted to be a statistician even if I’d been the #1 kid at the Olympiad. Luckily this didn’t happen to me.

And now for the most important part

One of the tricks that I didn’t know about in the math olympiad training program is to plod along without giving up. Sometimes the direct approach works: you solve a problem by systematically eliminating all the alternatives. That’s a “trick” that’s useful in a lot of areas of academic life. For years I’ve been trying to get this message out to students: if you get stuck right away, don’t just stare at your desk and give up. Instead, work actively. This is a point I made in chapter 19 of the ARM book (in particular, the graph on page 416). My theory is that students at top universities have succeeded pretty well by being able to solve problems quickly; they haven’t really needed to develop the tools to solve problems systematically, by brute force, the way I like to do it.

I think they did try to explain this principle to us at the olympiad program (How to Solve It, and all that), but I didn’t ever get the point, partly I think because the problems were so artificial that there only seemed to be a point to solving them if it could be done easily or through some clever trick.

7 thoughts on “Confusing reliability with validity”

  1. Andrew, I went back and read the earlier post you linked to, about your days at MIT:

    "I ended up deciding that I didn't understand physics well enough to continue with it, but that's another story."

    Hey! Tell us more about that — definitely seems worth a blog post :-)

    Also, tell us whether you believe what SLA Marshall wrote about whether soldiers actually tried to kill the enemy! (Fire rate less than 25% or something like that!) I seem to recall this was also revisited recently by Randall Collins in his book Violence (http://press.princeton.edu/titles/8547.html) and also by West Point professor Dave Grossman.

  2. My experience has been very similar to yours, Prof. Gelman. I went from being very interested in (and therefore quite good at) math in my middle school and high school years, to literally abhorring the subject in college. Looking back, it had a lot to do with the teachers' lack of intellectual curiosity about demonstrating its practical applications. I used to (and even today do) find it difficult to wrap my head around a theory that had no way to jump off the page and become part of the physical world.

    My work has dragged me back into the subject, and what a fortuitous turn of events, I must say. And the place where I found a good intersection of math theory and practical applicability is statistics.

  3. Calling it a "mathematics competition" is odd – these are just the results of the undergraduate degree exams. They used to be rank-ordered; the top student was Senior Wrangler, the bottom one got a wooden spoon. Legend has it (although not Steve Hsu!) that the real geniuses tended to end up as 2nd Wrangler – they hadn't revised heavily, but could breeze through almost anything that was asked.

    Having to do the rank ordering led to a crazy system of exams; to separate the top 2 students one needed basically-impossible questions, on which mere mortals were never going to have a hope.

    More importantly, there's a bit of a legacy to all this; the modern exam questions have two parts, which tend to ask

    a) did you attend the lectures and grasp the basics?
    b) are you clever? (answer: "Yes" or "No")

    – the second part assesses whether you can use the material independently, say, proving a result you've not seen before.

    While I agree with you that plodding can be a good strategy, being trained to make 'clever' leaps of this sort does seem to have a place – it forces one to think deeply about what the goal actually is, and how and why the available tools may (or may not) help us get there.

  4. I think you may be oversimplifying from memory. The issue with those kinds of problems is often unfamiliarity, which is why training helps; you pick up a set of approaches you can try. Without the training – or luck of familiarity – you don't know how to plod.

    I think Feynman's comment about this is apt; he refers to the unique toolbox each person has. That's partly brains, partly exposure.

    As for me, I could understand the physics just fine, but the math frustrated me. I had no interest in calculating Hamiltonians, etc., in part because I was never good at memorizing all the "oh! that's a shortcut" or "oh! I know what this integrates to" stuff.

    One metaphor I use in my head is that much of this stuff happens at the level of assembly language, or at best a lower-level language, with the APIs largely obfuscated. That excludes the kind of people who function better at manipulating APIs: certain designer types, or higher-level-language programmers. One of my hopes is that the future will allow more work at that level, that we'll have more transparently reusable parts that allow manipulation at a higher, more constructive level.

  5. Steve:

    It's not much of a story. I realized that there were things we were discussing in my physics classes that the instructor understood but I didn't. I didn't have a deep understanding of physics.

    Regarding Marshall and the rate of fire, I know nothing more than is given in the references to my article. My contribution here was to recognize the inappropriateness of the prisoner's dilemma in Axelrod's example; the other stuff is just background.

    Cam, Jonathan: Regarding the "plodding" strategy: yes, you certainly have to have a lot of background knowledge to be able to solve math olympiad problems using a plodding strategy. What I was really saying was that, given what I already knew and could do, I could've done much better on these problems if I hadn't always been looking for a trick solution. And this is true at higher levels too, such as in my graduate statistics classes. Often, the right way to go involves mathematical understanding and also the patience to fit and understand lots of models.

  6. In order to solve a problem you need to (a) spend time trying to solve it, (b) have some clue as to how to solve it, and (c) have enough brainpower to solve it given your level of cluefulness and time commitment.

    What's interesting about contests like the Putnam contest (U.S. college math) and TopCoder (open programming) is that average skill-level professionals can't solve the hardest problems with only the knowledge in their heads, even given lots of time.

    I'm relatively well educated in how to turn the crank for the kinds of programming problems in TopCoder, but it can take me three or four hours to do one of the harder top-tier problems, even when I know roughly what the answer's going to look like before I start. As to knowledge, I'm definitely faster on problems related to my day job (text processing) and slower on things that are esoteric to me (like geometry). The winners of the contest solve these hard problems in a few minutes, problem after problem. Yes, they have a bit more experience with these kinds of problems, but I think it's fair to conclude they have much bigger mental muscles turning the crank.

    As a math major in college, I'd have been lucky to solve one of the questions on a Putnam exam given a week to think about it.

    PS: Like many applied math types, I went through the progression from "I'm one of the best at math in school" to "I'm average among math majors at college" to "I'll never be able to compete professionally". Unlike Andrew, I sort of wish I had that genius-level math insight, at least for a week or two (no, I'm not asking to be on Twilight Zone). I also realize now there's more to professional success than brainpower; so much of it's project management, communication, and effort.

  7. Bob: I think I could do Olympiad and Putnam problems pretty easily now, especially if I spent a few days practicing. I don't think I'd come anywhere close to first place, but I'd do fine. All these years of experience would help. I think there are lots of practicing applied mathematicians in a similar situation to mine who'd do well on these exams. Regarding your last point, sure, I'd like to have more math insight--every once in a while, it would help a lot--but I think statistical insight is more useful to me.
