Understanding the hot hand, and the myth of the hot hand, and the myth of the myth of the hot hand, and the myth of the myth of the myth of the hot hand, all at the same time

Josh Miller writes:

I came across your paper in the Journal of Management on unreplicable research, and in it you illustrate a point about the null hypothesis via the hot hand literature.

I am writing you because I’d like to move your current prior (even if our work uses a classical approach). I am also curious to hear your thoughts about what my co-author and I have done.

We have some new experimental and empirical work showing that the hot hand phenomenon can be substantial in individual players. We think our measures are more tightly related to hot hand shooting (rather than cold hand shooting).

Also, we find clear evidence of hot hand shooting in Gilovich, Vallone & Tversky’s original data set.

Our new paper, “A Cold Shower for the Hot Hand Fallacy,” is on SSRN, here.

We have three comments on your discussion in Journal of Management:

1. The earlier reported small effect sizes come about for three main reasons: (1) pooling data across players means the guys who don’t get hot (or who fall apart) attenuate average effects, so you don’t see the hot guys; (2) the measurement error story of D. Stone (who I see commented on your blog once); (3) not every streak is a hot streak, so the real, infrequent but persistent hot hands get diluted; you would need to measure something else in conjunction with shot outcomes to pick this up.

2. We overturn the basic findings of GVT: it is not a fallacy to believe that some players can get substantial hot hands. We have the proof of concept in 3 separate controlled shooting studies (GVT’s, an earlier one, and our own). We have more discussion of this in the paper.

3. Now, while it is no longer the case that believing in the hot hand is a fallacy, there remains a question which you pose as answered: to what extent do players, coaches, or fans overestimate hot hand effects based on shot outcomes alone? An important first point: this overestimation wasn’t the main point of GVT, because it is really hard to show that players and coaches are overestimating the impact via the decisions they make (stated beliefs would be a little silly, but these haven’t been elicited cleanly). GVT did something cleaner: show no effect, and then you know any belief in the hot hand must be fallacious.

The question I have for you: do you think Bill Russell, thought to be the greatest team player of all time, was wrong when he said this (in his retirement letter to SI)?

People didn’t give us credit for being as good as we were last season. Personally, I think we won because we had the best team in the league. Some guys talked about all the stars on the other teams, and they quote statistics to show other teams were better. Let’s talk about statistics. The important statistics in basketball are supposed to be points scored, rebounds and assists. But nobody keeps statistics on other important things—the good fake you make that helps your teammate score; the bad pass you force the other team to make; the good long pass you make that sets up another pass that sets up another pass that leads to a score; the way you recognize when one of your teammates has a hot hand that night and you give up your own shot so he can take it. All of those things. Those were some of the things we excelled in that you won’t find in the statistics. There was only one statistic that was important to us—won and lost.

Because if you read the GVT 1985 paper and the GT 1989 paper in Chance, that is the message you get.

Here’s the relevant passage from my recent article in the Journal of Management:

As an example, consider the continuing controversy regarding the “hot hand” in basketball. Ever since the celebrated study of Gilovich, Vallone, and Tversky (1985) found no evidence of serial correlation in the successive shots of college and professional basketball players, people have been combing sports statistics to discover in what settings, if any, the hot hand might appear. Yaari (2012) points to some studies that have found time dependence in basketball, baseball, volleyball, and bowling, and this is sometimes presented as a debate: Does the hot hand exist or not?

A better framing is to start from the position that the effects are certainly not zero. Athletes are not machines, and anything that can affect their expectations (for example, success in previous tries) should affect their performance—one way or another. To put it another way, there is little debate that a “cold hand” can exist: It is no surprise that a player will be less successful if he or she is sick, or injured, or playing against excellent defense. Occasional periods of poor performance will manifest themselves as a small positive time correlation when data are aggregated.

However, the effects that have been seen are small, on the order of 2 percentage points (for example, the probability of a success in some sports task might be 45% if a player is “hot” and 43% otherwise). These small average differences exist amid a huge amount of variation, not just among players but also across different scenarios for a particular player. Sometimes if you succeed, you will stay relaxed and focused; other times you can succeed and get overconfident.

Whatever the latest results on particular sports, we cannot see anyone overturning the basic finding of Gilovich et al. (1985) that players and spectators alike will perceive the hot hand even when it does not exist and dramatically overestimate the magnitude and consistency of any hot-hand phenomenon that does exist. In short, this is yet another problem where much is lost by going down the standard route of null hypothesis testing. Better to start with the admission of variation in the effect and go from there.
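The aggregation point in the passage above, that occasional cold stretches show up as a small positive serial correlation in pooled shot data, can be checked with a quick simulation. This is an editorial sketch with made-up numbers (a 50% shooter who drops to 35% during occasional cold stretches), not an analysis from any of the papers discussed:

```python
import random

random.seed(1)

def simulate_shots(n, p_normal=0.50, p_cold=0.35,
                   p_enter_cold=0.02, p_exit_cold=0.10):
    """Simulate a shooter who occasionally slips into 'cold' stretches."""
    shots, cold = [], False
    for _ in range(n):
        if cold:
            cold = random.random() > p_exit_cold   # stay cold with prob 0.90
        else:
            cold = random.random() < p_enter_cold  # rarely enter a cold stretch
        p = p_cold if cold else p_normal
        shots.append(1 if random.random() < p else 0)
    return shots

def serial_corr(x):
    """Lag-1 correlation between consecutive shot outcomes."""
    n = len(x) - 1
    mx, my = sum(x[:-1]) / n, sum(x[1:]) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x[:-1], x[1:])) / n
    vx = sum((a - mx) ** 2 for a in x[:-1]) / n
    vy = sum((b - my) ** 2 for b in x[1:]) / n
    return cov / (vx * vy) ** 0.5

shots = simulate_shots(200_000)
r = serial_corr(shots)  # small but positive, on the order of 0.01
```

Even though the cold-state drop is 15 percentage points, the pooled lag-1 correlation comes out around 0.01, which is consistent with the "small positive time correlation when data are aggregated" described above.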

And here is my response to Miller:

What is your estimated difference in probability of successful shot in pre-chosen hot and non-hot situations? I didn’t see this number in your paper, but my impression from earlier literature is that any effect is on the order of magnitude of 2 percentage points, which is not zero but is small compared to people’s subjective perceptions. My own experience, if this helps any, is that I do feel that I have a hot hand when I’m making a basketball shot, but that feeling of hotness is coming as a consequence of the pleasant but largely random event of my shot happening to fall into the hoop. To me, the hot hand fallacy is not such a surprise; it is consistent with the “illusion of control” (to use another psychology catchphrase).

The Bill Russell quote is interesting, but given the findings of the classic hot hand paper, it is not surprising to me that a player would view the hot hand as a major factor, whether or not it is indeed important. Players can believe in all sorts of conventional wisdom. Of course I agree with Russell’s statement that all that matters is wins and losses. I’d guess that points scored for and against is a pretty important statistic too. All the other statistics we see are just imperfect attempts to better understand point-scoring.

To which Miller replied:

On your first question, I’ll give you the quick measure, but it depends on the player. Let’s compare the hit rate after making 3+ shots in a row to the hit rate in any other shooting situation. For the player RC, because he was nearly significant in the first session, we followed up with him 6 months later to see if this predicted a hot hand out of sample, and it did: on average his boost was 8-9 percentage points across all sessions (see p. 25 for the difference). The hottest player in the JNI data set of 6 players, with 9 different shooting sessions (he had periods of elevated performance in all sessions), had a boost of around 13 percentage points (see p. 28 for the difference). In GVT’s data, 8 out of 26 shooters had a boost of 10+ percentage points, and 4 had boosts of more than 20 percentage points (see page 29 for a brief report). It’s clear some players have substantial boosts in performance, but yes, the average effect is modest as in previous studies, around a 3-5 percentage point boost. I think the important point is not that the hot hand is some big average effect, but that some players have a tendency to be streaky.
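The comparison Miller describes (hit rate immediately after three or more consecutive makes vs. the hit rate on all other shots) can be written down in a few lines. This is an editorial sketch of that statistic, not code from the paper:

```python
def streak_split(shots, k=3):
    """Split shots into those taken right after k consecutive makes
    and all other shots; return the hit rate for each group."""
    after_streak, other = [], []
    for i, s in enumerate(shots):
        if i >= k and all(shots[i - k:i]):
            after_streak.append(s)
        else:
            other.append(s)
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(after_streak), rate(other)

# Toy sequence: 1 = make, 0 = miss.
hot, rest = streak_split([1, 1, 1, 1, 0, 1, 1, 1, 0, 1])
# hot is the rate on the 3 shots that followed three straight makes (1/3 here)
```

The "boost" quoted in the letter is then just the difference between the two rates, computed per player rather than on pooled data.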

On Russell. Players receive information beyond sequential shot outcome data; they have long experience playing with teammates and they can pick up cues on a player’s underlying mental and physical state to use in conjunction with sequential outcome data, so in that sense outcome data may be more informative for them than it would be for a fan. Further, the mechanism for getting hot isn’t always positive feedback from shot outcomes into a player’s ability; another mechanism is endogenous fluctuation in mental and physiological state, or exogenous input such as energy from fans, teammates, etc. In this case, for teammates and coaches, the cues on mental and physical state are more important than the shot outcomes. Notice that if you take the original Cognitive Psychology paper and the two Chance papers, the message is against both mechanisms.

Now, just to clarify, in my personal view, the tendency for spectators to attach too much meaning to streaks is clearly there; we can see it any time we watch the 3-point contest (the videos are on YouTube). Any time a player hits three shots he is “heating up.” This is the pattern of intuitive judgement that GVT identified, it is interesting psychologically, and it was predicted by previous lab experiments. Instead, we approach it from the perspective of whether there is strong evidence that players and coaches are wildly off. Our evidence suggests their belief can be justified, but we don’t demonstrate that it is in any particular game circumstance (no one has shown that it isn’t!). If you look at some recent interesting work from Matthew Goldman and Justin Rao: on average, players do a surprisingly good job of allocating their shots.

Good stuff. I’ll just say this: I’m terrible at basketball. But every time I take a shot, I have the conviction that if I really really focus, I’ll be able to get it in.

45 thoughts on “Understanding the hot hand, and the myth of the hot hand, and the myth of the myth of the hot hand, and the myth of the myth of the myth of the hot hand, all at the same time”

  1. This is closely linked to the controversy in finance over whether active investors can beat the market (outside of merely getting lucky). Apparent master investors, it is claimed, are accidentally getting rich because they’re at the upper end of the random spread of investor performance about the mean.

    In both cases, statisticians’ occasional inability to find evidence for the hot hand in basketball or for legitimate investing prowess says a great deal more about statistics than it does about basketball or investing.

      • Suppose that 10% of people have real skill at investing, and 90% rely mostly on luck. Further, pretend that the lucky people have a range of outcomes that includes the range you could be expected to have with skill. Finally, pretend that you interpret probabilities as frequencies and you construct the frequency histogram from a large dataset which is a random sample from the population of people.

        Will you be able to determine whether “skill” exists? I think the answer is no, not with that kind of methodology.

        What these people seem to be doing with the Hot Hand is specifically trying to figure out what the short term effect of doing well is on individual players. Aggregating across multiple players to increase sample size ignores the variation in streakiness.

        On the other hand, it’s entirely possible that people interpret “hot hand” and “investment skill” far beyond what is reasonable, and that the signal should really be smaller than intuitive assessments of it. So I think you can have it both ways: skill and feedback exists, but people’s perception of it exaggerates its magnitude.
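The 10%-skilled thought experiment above is easy to simulate: draw one year of returns for a population where 10% of investors have a true edge, and the two groups are essentially indistinguishable in the pooled outcomes. The specific numbers here (a 3-point true edge against 15-point year-to-year noise) are illustrative assumptions, not estimates:

```python
import random

random.seed(7)

N = 100_000
returns = []
for _ in range(N):
    skilled = random.random() < 0.10        # 10% of investors have real skill
    mu = 0.08 if skilled else 0.05          # assumed 3-point true edge
    returns.append(random.gauss(mu, 0.15))  # swamped by 15-point noise

# How often does a random unskilled investor beat a random skilled one
# over a single year? With these numbers it is close to a coin flip,
# so one year of outcomes cannot separate the groups.
pairs = 50_000
beats = sum(
    random.gauss(0.05, 0.15) > random.gauss(0.08, 0.15) for _ in range(pairs)
)
share = beats / pairs  # roughly 0.44
```

An unskilled investor beats a skilled one about 44% of the time in a given year, which is why the frequency histogram alone cannot answer whether "skill" exists.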

        • It’s worse than that. You could have 100% of players experiencing hot or cold hands as a result of some real underlying causal mechanism and it could still look random. The chief property of randomness is that lots of things look random regardless of what caused them. It’s a kind of default; it takes both a strong and an unusual cause to give you something else.

          There’s more than enough evidence to suggest the hot hand and investor prowess are real before beginning the analysis. Statisticians should merely go about determining (bounding/estimating) their size for individuals.

          If someone needs more evidence that it exists, then psychologists need to hook NBA players up to neuroscanners or something, because statisticians can’t get it unless the effect is so big everyone could see it without statistics.

        • What if I had simulated that exact same shot data with my known deterministic algorithm for simulating “random” numbers?

          Would it be wrong of me to claim an underlying unifying explanation (a cause!) for the data existed even though a statistical test said there were no significant patterns?

        • The easiest way to disambiguate private knowledge from private delusion is simply to ask people to make between-the-shots predictions, based, for example, on a videotape stopped at the exact moment the ball leaves the hands of the shooter. Has anybody tried that?

        • “There’s more than enough evidence to suggest hot hand and investor prowess is real before beginning the analysis.”

          Like what?

        • Like every last person who’s ever experienced being in the zone or had their confidence shot, combined with all those known mind-body effects (which often aren’t big but are still there), plus the knowledge that each basket is attempted by an incredibly powerful computation/sensing device exposed to decades of “training data”.

          That’s our background knowledge, and it’s a hell of a lot better evidence than those statistical analyses. Notice I suggested there are ways to get more definite proof; it’s just unlikely to come from statistics.

          That’s not necessarily because the statistics were poorly done; it’s because using this kind of statistical data mining to analyze a massive neural system plus a chaotic physical system is an incredibly weak scientific tool even when done perfectly.

        • What if each player goes through hot and cold cycles, but the effect is wiped out in the aggregate because this applies to both the defender and person attempting the shot?

        • And what if there is a “publication bias,” i.e., there’s a bunch of very astute investors who aren’t included in the data?

        • MikeM,

          Definitely. Also, the fact that superior investors are going to be wealthier, and hence smaller in number than the less wealthy, means you’re inherently searching for a small signal.

          Another complication is that investors aren’t being compared to an absolute standard. They’re being compared to the market, which is a kind of average of all investors.

        • There is an inherent weakness to significance testing.

          Significance testing counts on finding unusual patterns to “see” causes in the data. The more unusual the pattern you’re looking for, the bigger the “usual” category. Since the vast majority of causes lead to “usual” patterns, simply because the “usual” ones are more numerous, significance testing can’t “see” those causes.

          That massive collection of “usual” patterns is like a giant wall separating most causes from the Statistician.

    • So true, Anonymous. A subject dear to my heart, yet I find few on my side who can express the point as well as you are doing in this thread. Thanks for fighting the good fight.

    • If these “master” investors have “hot hands” in terms of stringing together frauds and insider trading, I can see what you mean. Bernie Madoff had a pretty hot hand there for a while.

      • I guess I’d make a similar comment about investing as about basketball. As soon as we know that some people have an edge, the only question remaining is to get an interval estimate for the size of each investor’s edge.

        We do know some people have an edge. We know for a fact that congress critters are legally trading off insider information and making abnormal profits as a result.

  2. I wouldn’t be surprised if the hot hand effect is much larger for amateurs than pros. Pros have practiced the shooting motion enough that it’s almost the same every time and most variation in the motion from shot to shot is random. Amateurs might occasionally alter their form unintentionally and sometimes the changes will be beneficial and sometimes detrimental.

    • (I mention this because it seems like Miller’s study was done on amateurs, but I’m not sure. And of course I’m considering D1 college players ‘pro’ in the statement above.)

        • Hi Zach, thanks for the comment. To clarify, our shooting study was not conducted on amateurs. The participants were paid semi-pros, one of whom is now in the 2nd division of Spanish pro ball, which is a high-level league. Gilovich, Vallone & Tversky (1985) conducted a study with players on the Cornell University men’s and women’s teams (D1). Jagacinski, Newell & Isaac (1979) conducted a study with “former college-level players,” one of whom was a former All-American.

        I think your point is an important one, though I would guess that professional vs. amateur is not the key distinction. Plenty of NBA players, usually centers and some forwards, are not very good shooters, you might even call them amateur. These players may be more likely to lose their focus and get a cold hand, which is different than the hot hand. In our study, because the participants are from the same team, not all of them are shooting specialists, some are forwards and centers, and we find evidence of a cold hand as well, though we don’t get into the details of the cold hand in the paper. The player with the largest hot hand effect size in our study, RC, is a highly skilled shooter with consistent shooting mechanics (which is clear in the videos we have).

        • These data are from a situation where the shooter isn’t defended, shoots repeatedly from the same spot, and takes the shots a few seconds apart (otherwise it’d take a long time to do 300, right?). That was the impression I got.

          I think it’s a lot easier to get into a rhythm when you can stay in one spot and shoot over and over compared to a situation where you are shooting from different spots, 1+ minute apart, with a defender on you. So my guess is that the hot hand effect you see in an experiment will be orders of magnitude bigger than the one in a real game, and my impression is that it’s not that big to start with here, so is this plausibly “non-zero” in an actual game situation?

        • Hi Steven,

          For effect sizes, they are relatively big to start with, a 10 percentage point boost is like going from the 50th percentile to the top, if it were to happen in game situations.

          In terms of generalizing from controlled shooting to games, specifically on the issue of not varying shot location: GVT’s study kept shooters at the same distance and had them move in an arc. If you are willing to assume the 10 percentage point boost for the 8 players in the GVT study would hold in a larger sample (4 out of 26 had a 20% boost, which is significant already), then that indicates location is not the crucial issue. The reason we did not vary even shot angle is that hit rates are not the same from each angle, so we would need many more shots than we collected to control for this variation and benchmark ability. I do agree that it seems like you can probably calibrate your shot better if you stay in the same spot, perhaps like calibrating a cannon or some other artillery piece, but the only evidence we find of calibration in controlled studies is in the first few shots (which is in line with what people have found in free throw shooting); after that, players seem to hit their shots at around 50%, looking at different time windows (in the paper).

          For in-game evidence, in the paper we discuss this recent work by Andrew Bocskocsky, John Ezekowitz and Carolyn Stein: http://www.sloansportsconference.com/wp-content/uploads/2014/02/2014_SSAC_The-Hot-Hand-A-New-Approach.pdf , which has evidence consistent with a modest average hot hand effect. Because the data are pooled across players, it is possible that some players have a substantial hot hand effect that is being diluted. If you want to see more, in the paper we detail why in-game data are so challenging to deal with, and we present a case for why we should infer that there is a hot hand effect in games, even if the smoking gun evidence isn’t there yet (we might need to develop measures of a player’s mental or physical state along with all the controls for shot difficulty).

        • 10% looks big when you put it that way, but only in a controlled experiment. Where you are on the floor, who is defending you, the kind of play you run, etc. are all going to have much bigger effects than 10%, yet in a real game none of them will give you that kind of edge.

          The Ezekowitz paper puzzled me when I saw it at Sloan. They find that “hot” players, as in players who are on a streak, are less likely to hit shots, but when you redefine hot as players who beat the expectations of their model, then hot players tend to be persistently hot (a little bit). But since they don’t control for the defender, home or away (I think), and countless other factors, you would expect some positive serial correlation. If you exploit a mismatch, like a big 5 posting up a stretch 4 playing center repeatedly while the other team can’t substitute or adjust fast enough, you’ll see positive serial correlation, but that isn’t really what most people have in mind by “hot hand.” The first definition of hot seems more like what Bill Russell had in mind when he says give the hot guy the ball. They also don’t appear to adjust their SEs for the fact that p-hat is estimated. With a big sample you’d think the SE is small so it doesn’t matter much, but their model is heavily saturated, so maybe it isn’t; I don’t remember.

          I agree with Andrew that the null of “no hot hand” is crazy, but I think it’s pretty reasonable to think that the cognitive costs of taking advantage of it could easily exceed the benefits. Does Gregg Popovich play the hot hand, or does he just focus on sticking to the general game plan, exploiting mismatches, etc.? That would be interesting to know.

        • Nice points.

          I agree completely that game data is difficult to work with, but Bocskocsky et al. (and Justin Rao before them, http://www.justinmrao.com/playersbeliefs.pdf) did their best with the data available. It is not obvious what would happen if more controls were added. The original Gilovich et al. study found that performance drops after recent success, but Justin Rao demonstrated that when controlling for just a few measures of shot difficulty (60 Lakers games, 2007-08), this performance drop goes away. Bocskocsky et al. have more granular controls for shot difficulty by employing optical tracking data, and, as you know, they find a modest performance increase after recent “success”, although they lose so much statistical power that they have to pool data across players, which turns the focus to average effects. Leaving other issues aside, what these studies show is that, to the extent that shot difficulty increases after recent success, controlling for it makes the estimated hot hand effect larger. Now, there are other ways shot difficulty may increase that they didn’t control for: if a player has exceptional recent performance, superior defenders may be assigned to him, the defense could get more aggressive and physical with him when he is not shooting, or the assigned defender could expend more energy on the little things (for example, occluding the shooter’s visual fixation at the right time; see Vickers: http://bit.ly/1l1gt4P, or Oudejans et al.: http://bit.ly/1sPSrst). These omitted variables would lead to an understatement of the effects. There is another side, as you mention, and we acknowledge this in our paper: there is between-game variation in performance unrelated to recent player performance, like the quality of the defender for that game, the quality of the opponent, the player’s health on that day, and much more. These omitted variables would lead to an overstatement of the effects. Where does this leave us?

          Despite the richness of modern data sets, we probably cannot find conclusive evidence for or against powerful in-game hot hand effects. What we can say is that when we control for all these factors and have players perform the exact physical act that they perform in games, in a controlled setting with strong financial incentives to be consistent, we find substantial hot hand effects. So the best available data says (1) it is not a fallacy to believe in the hot hand, and (2) the effect can be powerful in some players. For the smoking gun, we might have to wait until there are ways to get real-time measures of a player’s underlying mental or physical state, so we can ignore recent performance entirely.

          It is tempting to conclude: if it is so hard for us to find evidence for or against in-game hot hand effects, it should be pretty hard for teammates or coaches to detect it in a useful way. In our view this is not obvious. Coaches and teammates have thousands of hours of experience with each other and may see cues to a shooter’s underlying physical or mental state that a fan (or researcher) cannot. If a coach or teammate sees that a player has consistent shooting mechanics or the right body language, this may be informative in conjunction with the player’s recent shooting performance, and it may not be too costly to observe. I think it would be instructive to ask Popovich and other coaches for their opinions on this. If a player gets a 10% boost in a game, even if the boost were limited to unguarded shots, there is an incentive to try to create more of those situations. Seeing how well calibrated players and coaches are would be a natural next step, and there may be a way forward (see Goldman & Rao, http://bit.ly/1sAWGLT).

          and thanks for your interest!

          Joshua (& Adam !)

  3. The best, although specific, example providing evidence that the hot hand is real?

    Kobe Bryant’s 81 point game. In a single game, he scored from anywhere, on anyone, at anytime.

    • The best, although specific, example providing evidence that people over-interpret random occurrences as proof of their theories?

      Kobe Bryant is really, really good. The probability that he would drop 81 in one of his 1200+ professional games was, a priori, fairly high. Sure, you could say that means he was “hot”, but that’s like saying I’m “hot” when I win 10 hands of Blackjack in a row after playing a few thousand hands (assuming optimal play, Prob[winning 10 in a row] ~ 1/1200 or so).
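The commenter's back-of-the-envelope figure checks out under a plausible assumption. If a basic-strategy player wins roughly 49% of resolved hands (pushes set aside; that win probability is an assumption, not stated in the comment), ten straight wins works out close to the "~1/1200" quoted above:

```python
p_win = 0.49           # assumed per-hand win probability, pushes ignored
p_streak = p_win ** 10  # probability of winning 10 resolved hands in a row
one_in = 1 / p_streak   # roughly "1 in 1250"
```

Over a few thousand hands there are a few hundred non-overlapping windows of ten hands, so seeing such a streak at some point is unremarkable.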

      • They’re not the same at all actually. Assuming you’ve memorized Blackjack Basic Strategy you can carry out optimal (non-card counting) play easily, so whether you win 10 in a row depends on the random shuffling of the cards and not skill. Mental state in particular makes no difference.

        For Kobe to get to 81 depends on lots of things, including skill, opponents, diet, the temperature, what he did the day before, and possibly his mental state. So the question isn’t really “hot hand vs random” it’s “hot hand vs other explanation”.

        But if you’re so sure mental state can’t affect things, what if Kobe’s parents had died in a car accident the same day as the game and he scored 2 points after playing the whole time?

        How confident would you be in a statistical analysis claiming the outcome was just a random fluctuation and wasn’t the result of having a cold hand?

        • Fair enough, but my point wasn’t intended to argue that there are no external forces that affect how well people play on a particular day. Sure, people have better/worse days. And sure, I know the feeling of “being in the zone.” And no, I don’t believe that whether or not a shot is made is just a realization of a Bernoulli variable.

          My point is just that a good game is not evidence of a hot hand. It is evidence of a few small margins going in the right direction. Calling that a “hot hand” instead of “getting some lucky breaks” strikes me as basically using an anecdote to confirm a theory.

          For the record, I’m not sure whether there is or isn’t a “hot hand”. I suspect that, depending on how it is defined, it is on the spectrum between “tautologically true” and “an over-interpretation of random noise”, so how we define it is probably the most important thing. I guess I would say that proving a “hot hand” exists would require showing that there is some set of physiological/psychological states such that, when those are present, the probability that those “few small margins” go your way is greatly increased.

        • Once you admit a single instance in which mental state affects play, then all that remains is to estimate its size. The debate over whether it can happen is settled.

          If, as you say, a good game is not clear evidence of a hot hand for an individual player, then the interval estimate of the effect size for Kobe should include zero.

          If you want to shrink that interval estimate to make it more definite, or better yet, learn what affects these things, then you need to stop doing statistics and start doing science.

        • “The debate over whether it can happen is settled.”

          That is Andrew’s point. People have good and bad days. If you get the flu you’re probably going to miss more shots you would have made if you weren’t sick.

          A lot of people think one recognizable, important factor is “the hot hand.” If a guy is hot, it’s the opposite of having the flu: it’s easy to detect, and you capitalize on it by feeding him the ball (like Bill Russell said). And a lot of people think this is all an illusion: you’ll barely get any advantage from exploiting the “hot hand” because (1) it’s hard to recognize in real time and (2) people don’t get that hot that often, if ever, for it to be worth focusing on instead of making the extra pass or focusing on setting a good pick.

          Statistics is exactly about quantifying the magnitude of the effect. Science is, traditionally, mostly about using a theory to make a prediction (hypothesis) and then seeing if it is refuted or not (“hot hand isn’t real”, hypothesis has basically been rejected). The magnitude of the rejection (how hot?) often isn’t that important since the theory is dead either way.

        • August 2015
          Re: “I guess that I would say that proving a “hot hand” exists would require showing that there are some set of physiological/psychological states such that ..”

          I’m surprised – or maybe I just don’t know the topic – that the “hot hand” isn’t investigated with reference to the science of human attention. Being in the ‘zone’ seems to exist as a state/degree of mental attention or concentration. Anyway, yes, there was a video that purports to show detection of a type of mental state, which was then used to give a prompt. Essentially: “When should the amateur archer shoot the arrow in his/her hand?” Their demo shows: “When the amateur archer’s mental state has certain similarities to the EXPERT archer’s mental state when the EXPERT shoots his/her usual bull’s-eye best, prompt the amateur archer to shoot the arrow”.
          The show was a July 3, 2013 broadcast of “Down The Rabbit Hole”; the video [Discovery Science channel, “Through the Wormhole,” Season 4, Episode 6, “Can Our Minds Be Hacked?” (3 Jul. 2013)] is posted at: http://www.advancedbrainmonitoring.com/abm-on-through-the-wormhole?portfolioID=10528 [6 minutes].
          A transcript of the video is at: http://www.springfieldspringfield.co.uk/view_episode_scripts.php?tv-show=through-the-wormhole&episode=s04e05 ; search for the text: “If your thoughts can be decoded, could they be altered?”
          The researcher who developed the device, Chris Berka, has a company website: http://www.advancedbrainmonitoring.com/about-us/

          I found your website from a Google search, where my questions were:
          – Was there any breakdown of hot hand data by amateur vs. professional players? [I found a clarification above, thank you; e.g., not all pros are ‘expert’ shooters.] And,
          – Was there any breakdown of hot hand data by basketball shooting performance after the start of the game and after other time breaks, thereby exploring the two phenomena that have been labeled ‘directed attention fatigue’ and ‘decision-making fatigue’?
          Thanks for your posting.

    • The number of points doesn’t really tell you anything.

      Bryant was 28/46 for field goals that night, which is a bit less than 61%. Career, he’s about a 45% shooter. If a 45% shooter takes 46 shots, there’s about a 2.5% chance he will hit 28 or more. (I’m ignoring free throws, for which he was a creditable but easily believable 18/20, and ignoring the distinction between three pointers and other field goals). A team plays 82 games per season in the regular season alone, so if Bryant took 46 shots every game we would expect a couple of 81-point games out of him every year!
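      The binomial arithmetic in that comment is easy to check. A minimal sketch in Python (the 45% career rate and 46-shot figures are the comment’s; everything else is just an exact tail-probability computation):

      ```python
      from math import comb

      def binom_tail(n, p, k):
          """P(X >= k) for X ~ Binomial(n, p), computed exactly."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      # A career 45% shooter taking 46 shots: chance of hitting 28 or more.
      p_game = binom_tail(46, 0.45, 28)
      print(f"P(28+ makes in 46 shots) = {p_game:.3f}")   # roughly 0.02

      # Expected number of such games across an 82-game season,
      # if he took 46 shots every night.
      print(f"expected per season: {82 * p_game:.1f}")
      ```

      The result lands in the neighborhood of the comment’s 2.5% figure, and multiplying by 82 games gives the “couple per year” back-of-envelope claim.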

  4. I think that a hot hand effect will be plausible when the task is very difficult and requires the most from reflexes, skill, and mental state. When all of these line up, the shot can be made, the ball can be batted, etc. Otherwise, maybe not. My own best personal experience comes from ping-pong, which I was good at. I would get short periods when I could slam the ball successfully time after time, then I’d start missing. Or I would be able to return any shot, time after time, no matter how tricky. The successful periods always felt different from the normal ones.

    I’m not saying that’s evidence, just offering it to give plausibility to the possibility of a hot hand.

    If we think of each of the factors involved – reflex state, etc. – as continuous curves as a game progresses, they would rise and fall. When the curves for all factors become elevated at the same time, we might get a hot hand. Detecting this would require a different kind of analysis than just looking for serial correlations between shot outcomes.

    If a player can in fact somehow act to stay in such a state, one prediction would be that when the period of a hot hand is over, performance would tend to be worse than usual for a time. This is because it would take some mental energy to stay in the hypothetical combined productive state, and at the end that reserve would be depleted, much as “will power” has been shown to be depletable.
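    That mechanism is easy to turn into a toy simulation. Here is a hedged sketch (all parameters are invented for illustration, not fitted to anything): several slowly varying latent factors, a shooting-probability boost only when all of them are simultaneously elevated, and a short “depleted” dip after each hot stretch ends.

    ```python
    import random

    random.seed(1)

    def simulate_shots(n_shots=2000, n_factors=3, base_p=0.45,
                       hot_bonus=0.15, dip_penalty=0.10, dip_len=10):
        """Toy model: each latent factor is a mean-reverting random walk.
        The player is 'hot' only when all factors are elevated at once;
        when a hot stretch ends, a short 'depleted' stretch follows."""
        factors = [0.0] * n_factors
        dip = 0
        outcomes, states = [], []
        for _ in range(n_shots):
            factors = [0.9 * f + random.gauss(0, 0.3) for f in factors]
            if all(f > 0.5 for f in factors):
                p, state = base_p + hot_bonus, "hot"
                dip = dip_len                  # effort reserve drains while hot
            elif dip > 0:
                p, state = base_p - dip_penalty, "depleted"
                dip -= 1
            else:
                p, state = base_p, "normal"
            outcomes.append(random.random() < p)
            states.append(state)
        return outcomes, states

    outcomes, states = simulate_shots()
    for s in ("hot", "depleted", "normal"):
        shots = [o for o, st in zip(outcomes, states) if st == s]
        if shots:
            print(f"{s}: {len(shots)} shots, {sum(shots)/len(shots):.2f} made")
    ```

    Because the hot state here is rare and short-lived, a plain serial-correlation test on the outcome sequence would have little power to find it, which is exactly the point of the comment.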

    • I like your thinking here, and I think it nicely illustrates an important point about Bayesian methodology.

      The main thing I want to do in creating models is to understand mechanisms, and to distinguish mechanisms that are real from those that don’t properly explain the world. Here you specify a hypothesis about mechanism – that several aspects of performance vary in time and can synchronize to produce a high-performance period – and you also identify a plausible consequence: maintaining the heightened state requires effort, so the high-performance period should be followed by a noticeably reduced one.

      All of those ideas could be operationalized into a dynamic model, whose predictions could be used in a Bayesian context to produce a likelihood, or a pseudo-likelihood as in the ABC (approximate Bayesian computation) method, and we could get inferences in a straightforward way using Bayesian methods without having to invent an ad hoc approach to inference.

      It seems there are too few researchers who have the subject-specific knowledge, the mathematical modeling skills, and the statistical and data-analysis skills to go after mechanisms in this way.

    • I’d like to co-author a paper on the hot hand with Steph Curry (http://youtu.be/Dbk7BlCShsE)! But seriously, in those “hot hand” highlights it seems obvious that he and his teammates believe he’s got the hot hand, as reflected in Curry’s demeanor and his increasingly off-the-wall shot selection. All of which is to say, +1 to Passin’s comment.

  5. Could this be another case of fixating on average effects? The discussion of the hot hand effect, at least as I’m familiar with it, carries the implicit assumption that the effect either applies or doesn’t apply in general, across a diverse population of players. If some shooters are streaky relative to others, consistently, this would indicate a dimension of heterogeneity that ought to be taken into account. It might well be the case that there is no discernible hot hand effect averaged across the entire population, or that it is too weak to matter, but that it could have a lot of practical importance for a subset of players.
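    The attenuation-by-pooling point can be illustrated with a quick simulation (a sketch with made-up numbers): give a minority of players a genuine streak effect and the rest none, and the pooled estimate shrinks toward zero even though the streaky players’ individual effects are large.

    ```python
    import random

    random.seed(2)

    def shoot(n, base_p, bonus):
        """n shots; make probability rises by `bonus` after a made shot."""
        shots, prev = [], False
        for _ in range(n):
            p = base_p + (bonus if prev else 0.0)
            prev = random.random() < p
            shots.append(prev)
        return shots

    def streak_diff(shots):
        """P(make | previous make) - P(make | previous miss)."""
        after_make = [b for a, b in zip(shots, shots[1:]) if a]
        after_miss = [b for a, b in zip(shots, shots[1:]) if not a]
        return (sum(after_make) / len(after_make)
                - sum(after_miss) / len(after_miss))

    n_shots = 20000  # long records so per-player estimates are stable
    streaky = [shoot(n_shots, 0.45, 0.10) for _ in range(2)]  # real hot hand
    steady  = [shoot(n_shots, 0.45, 0.00) for _ in range(8)]  # no effect

    pooled = [s for player in streaky + steady for s in player]
    print("streaky players:", [round(streak_diff(s), 3) for s in streaky])
    print("steady players: ", [round(streak_diff(s), 3) for s in steady])
    print("pooled estimate:", round(streak_diff(pooled), 3))
    ```

    The two streaky players show a difference near the true 0.10, the eight steady players show roughly zero, and the pooled estimate lands close to zero: averaging across a heterogeneous population hides the subset for whom the effect matters.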

  6. I find the whole concept odd: of course people hit shots in a row, but that doesn’t extrapolate to a generalized notion of “hot hands.” Even if you constructed a random game – or, even better, one in which players are matched in abilities, heights, etc. – you’d still have shots made in a row just by chance. It might, and probably would, happen less if you took out all the gamesmanship of creating mismatches, types of defenses, plays run, etc., but so freaking what? If you have a forward who can shoot and he’s being covered by a guy five inches shorter, he should make more shots. And if they run plays that get a guy open, he should make more shots just because he’s open.

    To change the ideas a bit, I like the absolutely unspellable Mihaly Csikszentmihalyi’s research into “flow states.” (Thank God for copy and paste.) He finds that people experience a sense of control or flow in the game, and that better players experience it more. Take a Larry Bird or a Bernard King – he of the 12-foot jumper – and think of them not as “shooters with hot hands” but as players of top ability who engage in the flow of the game and thus feel more in control. That means they are, I would think from the evidence, more likely to place themselves in the correct positions for the play (and the correct body position for a shot, because that’s a huge variable – see Ray Allen for how to shoot a long jumper coming off a pick). It isn’t reducible, I think, to a “hot hand” when it’s viewed as a player being more in a “flow state”; “flow” may lead to an assist or a rebound or simply better positioning, and it factors into what makes that player so good.

    BTW, if you aren’t familiar with the work, it has some insights into behavior that I find interesting. Example: people become addicted to TV because it is a low-investment, low-payback activity in which study participants report low engagement and low “flow state.” A hobby – of any kind, active or intellectual – requires a substantial upfront investment, but participants report four times the “flow state.” This speaks to why “no TV for a week” doesn’t work: you need time to develop or otherwise do high-investment/high-return activities so that you find the low-payback TV boring. And even then, it’s easy to get sucked into low investment, as anyone who skips the gym and vegetates knows.

    • “…4 times the ‘flow state’…”

      I feel like this fragment deserves an entire post on the subject of the relationship between measurement and statistics, via a close reading of the rhetoric and diction of academic quantitative research.

      This comment should in no way be interpreted as related to the substance of the post above.

      • AFAIK, the measurements were done using standard experimental methods such as pressing buttons while engaged in activities, observations and interviews. It’s been probably 20 years since I read the papers. You’d have to read the work to determine whether the measurements were done correctly and whether the statistical analysis is satisfactory.

    • I don’t think “hot hand” means just hitting shots in a row. At least, if it does, it has no content. Hot Hand refers to some kind of causal reason for shots to be hit above some baseline level. Candidate models should be something like this:

      1) There is an accuracy parameter, possibly a function of several other parameters, that determines some aspect of accuracy, and the value of this parameter is, for whatever reason, above its baseline.

      2) There is an accuracy parameter as above, and its value can, under certain circumstances, be increased by the very act of hitting a shot, so that there is a feedback loop: hitting several shots in a row, whether they were relatively random or not, could then influence, through psychology or other means, whether a player continues to hit shots.

      That should be contrasted with:

      3) There is a confidence-type parameter among teammates or opposing-team members that makes people more likely to give the player the ball, or less likely to defend well against a player perceived to have a “hot hand.” On this theory the player may not be playing better, but people are giving the player more opportunities.

      4) There is a baseline level for this player and things just happen to go well in a short streak.

      The first two are plausible mechanisms for a “hot hand” in the player; the third is still exploitable but isn’t a “hot hand” for the player. The fourth is just “random noise,” which is itself a pretty hard thing to define in this context. Humans on a team are not dice on a table or coin flips in the air.
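      It’s worth noting how much apparent streakiness model (4) alone produces. A quick sketch (parameters invented for illustration: a steady 50% shooter taking 20 shots a game, no hot hand at all):

      ```python
      import random

      random.seed(3)

      def longest_streak(shots):
          """Length of the longest run of consecutive made shots."""
          best = cur = 0
          for made in shots:
              cur = cur + 1 if made else 0
              best = max(best, cur)
          return best

      # Model (4): a steady 50% shooter, 20 shots per game, many games.
      streaks = [longest_streak([random.random() < 0.5 for _ in range(20)])
                 for _ in range(10000)]
      avg = sum(streaks) / len(streaks)
      frac5 = sum(s >= 5 for s in streaks) / len(streaks)
      print("average longest streak per game:", round(avg, 2))
      print("fraction of games with a 5+ streak:", round(frac5, 3))
      ```

      Even with zero underlying hot hand, runs of five or more makes show up in a substantial fraction of games, which is why “he hit several in a row” by itself can’t distinguish models (1)–(3) from model (4).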

  7. Something I’ve noticed over the years is that seemingly implausible hot streaks in sports often turn out, looking back, to be the revelation that an athlete or team has arrived at a new level. For example, in the fall of 1974 I was talking to a naive fan of the traditionally hapless Pittsburgh Steelers and I suckered him into betting that his Steelers would win their next five games in a row. But, little did he know, that would require the lowly Steelers to win the Super Bowl!

    Of course, he won the bet from me, and the Steelers went on to win 4 Super Bowls in 5 years. It turned out that at the moment I made the bet, the Steelers weren’t as mediocre as their history suggested, but were on the cusp of being one of the greatest teams ever. So in hindsight, it didn’t require an implausible hot streak for Bradshaw, Greene, Swann, Harris, Lambert, etc. to win five games in a row.

  8. Pingback: Low correlation of predictions and outcomes is no evidence against hot hand - Statistical Modeling, Causal Inference, and Social Science
