**1.** Effect sizes of just about everything are overestimated, thanks to selection on statistical significance, motivation to find big effects to support favorite theories, researcher degrees of freedom, looking under the lamp-post, and various other biases. The Edlin factor is usually less than 1. (See here for a recent example.)

**2.** For the hot hand, it’s the opposite. Correlations between successive shots are low, but, along with Josh Miller and just about everybody else who’s played sports, I think the real effect is large.

How to reconcile 1 and 2? The answer has little to do with the conditional probability paradox that Miller and Sanjurjo discovered, and everything to do with measurement error.

Here’s how it goes. Suppose you are “hot” half the time and “cold” half the time, with Pr(success) equal to 0.6 in your hot spells and 0.4 in your cold spells, and suppose the state persists from one shot to the next. Then Pr(success | previous success) = (0.5·0.6^2 + 0.5·0.4^2)/0.5 = 0.6^2 + 0.4^2 = 0.52, and by symmetry Pr(success | previous failure) = 0.48. So if you define the hot hand as the probability of success conditional on a previous success, minus the probability of success conditional on a previous failure, you’ll think the effect is only 0.04, even though in this simple model the true hot-minus-cold effect is 0.20.
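To see the attenuation numerically, here is a small simulation sketch of the two-state shooter above. The per-shot switching probability is my own assumption for illustration; the toy model only says the state persists from one shot to the next.

```python
import random

random.seed(1)

def simulate(n_shots=1_000_000, p_hot=0.6, p_cold=0.4, switch=0.05):
    """Two-state shooter: Pr(make) = 0.6 when hot, 0.4 when cold.

    `switch` is the (assumed) per-shot probability of flipping state.
    """
    hot = True
    shots = []
    for _ in range(n_shots):
        shots.append(1 if random.random() < (p_hot if hot else p_cold) else 0)
        if random.random() < switch:
            hot = not hot
    return shots

shots = simulate()
# Serial-correlation estimate: Pr(make | previous make) - Pr(make | previous miss)
after_make = [b for a, b in zip(shots, shots[1:]) if a == 1]
after_miss = [b for a, b in zip(shots, shots[1:]) if a == 0]
est = sum(after_make) / len(after_make) - sum(after_miss) / len(after_miss)
print(f"true hot-vs-cold gap: 0.20, serial-correlation estimate: {est:.3f}")
```

With fairly persistent states (switch = 0.05), the serial-correlation estimate comes out a bit under 0.04, nowhere near the true 0.20 gap between the hot and cold states.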

This is known as attenuation bias in statistics and econometrics and is a well-known consequence of conditioning on a background variable that is measured with error. The attenuation bias is particularly large here because a binary outcome is about the noisiest thing there is. This application of attenuation bias to the hot hand is not new (it’s in some of the hot hand literature that predates Miller and Sanjurjo, and they cite it); I’m focusing on it here because of its relevance to effect sizes.

So one message here is that it’s a mistake to *define* the hot hand in terms of serial correlation (so I disagree with Uri Simonsohn here).

Fundamentally, the hot hand hypothesis is that sometimes you’re hot and sometimes you’re not, and that this difference corresponds to some real aspect of your ability (i.e., you’re not just retroactively declaring yourself “hot” just because you made a shot). Serial correlation can be an effect of the hot hand, but it would be a mistake to define serial correlation *as* the hot hand.

One thing that’s often left open in hot hand discussions is to what extent the “hot hand” represents a latent state (sometimes you’re hot and sometimes you’re not, with this state unaffected by your shot) and to what extent it’s causal (you make a shot, or more generally you are playing well, and this temporarily increases your ability, whether because of better confidence or muscle memory or whatever). I guess it’s both things; that’s what Miller and Sanjurjo say too.

Also, remember our discussion from a couple years ago:

The null model is that each player j has a probability p_j of making a given shot, and that p_j is constant for the player (considering only shots of some particular difficulty level). But where does p_j come from? Obviously players improve with practice, with game experience, with coaching, etc. So p_j isn’t really a constant. But if “p” varies among players, and “p” varies over the time scale of years or months for individual players, why shouldn’t “p” vary over shorter time scales too? In what sense is “constant probability” a sensible null model at all?

I can see that “constant probability for any given player during a one-year period” is a better model than “p varies wildly from 0.2 to 0.8 for any player during the game.” But that’s a different story.

Ability varies during a game, during a season, and during a career. So it seems strange to think of constant p_j as a reasonable model.
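To get a feel for how weakly the data push back against the constant-p null, here is a quick sketch of a shooter whose ability drifts slowly between 0.45 and 0.55 over a "season" of 1000 shots, grouped into 50 "games" of 20 shots each. All the numbers are invented for illustration.

```python
import math
import random

random.seed(2)

n_blocks, block = 50, 20   # 50 games of 20 shots each (assumed numbers)
amp = 0.05                 # ability drifts between 0.45 and 0.55

props = []
t = 0
for _ in range(n_blocks):
    makes = 0
    for _ in range(block):
        p = 0.5 + amp * math.sin(2 * math.pi * t / 200)  # slow sinusoidal drift
        makes += random.random() < p
        t += 1
    props.append(makes / block)

# Compare game-to-game variance against what a constant-p binomial predicts.
p_bar = sum(props) / n_blocks
emp_var = sum((x - p_bar) ** 2 for x in props) / (n_blocks - 1)
binom_var = p_bar * (1 - p_bar) / block
print(f"empirical var: {emp_var:.4f}, constant-p binomial var: {binom_var:.4f}")
```

The between-game variance contributed by the drift (roughly amp^2/2 = 0.00125) is only about a tenth of the binomial sampling variance (about 0.0125), so a genuinely time-varying p is nearly indistinguishable from the constant-p null at this sample size.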

OK, fine. The hot hand exists, and estimates based on correlations will dramatically underestimate it because of attenuation bias.

But then, what about point 1 above: the psychology and economics research literature (not about the hot hand; I’m talking here about applied estimates of causal effects more generally) typically overestimates effect sizes, sometimes by a huge amount. How is the hot hand problem different from all other problems? In all other problems, published estimates are overestimates. But in this problem, the published estimates are too small. Attenuation bias happens in other problems, no? Indeed, I suspect that one reason econometricians have been so slow to recognize the importance of type M errors and the Edlin factor is that they’ve been taught about attenuation bias and they’ve been trained to believe that noisy estimates are too low. From econometrics training, it’s natural to believe that your published estimates are “if anything, too conservative.”

The difference, I think, is that in most problems of policy analysis and causal inference, the parameter to be estimated is clearly defined, or can be clearly defined. In the hot hand, we’re trying to estimate something latent.

To put it another way, suppose the “true” hot hand effect really is a large 0.2, with your probability going from 40% to 60% when you go from cold to hot. There’s not so much that can be done with this in practice, given that you never really know your hot or cold state. So a large underlying hot hand effect would not necessarily be accessible. That doesn’t mean the hot hand is unimportant, just that it’s elusive. Concentration, flow, etc., these definitely seem real. It’s the difference between estimating a particular treatment effect (which is likely to be small) and an entire underlying phenomenon (which can be huge).

There is another aspect to the hot hand, from the player’s point of view. When you are hot, you may change how you play. For myself, when I used to play pingpong fairly seriously, I’d get hot for maybe 5 – 10 shots. When I was hot, I could reliably make a certain killer forehand slam, among others. So I would go for those shots whereas normally I wouldn’t. I played much more aggressively. After the hot hand left – and I could usually tell right away – I would play much more defensively.

I wonder whether professionals’ game play could be studied to see if they have behavior changes like these, and if so whether they correlate with hot streaks.

Tom:

Yes, this has been studied. I don’t remember where, but I’m pretty sure I’ve seen some analyses showing that basketball players take more difficult shots when they’re hot.

Yup these are called heat checks and they can be pretty hilarious when a player makes a couple shots then chucks up something ridiculous from the logo. There’s a joke that sometimes you wish a player didn’t make a certain shot because then they end up overconfident and start taking a lot of bad shots. Which also hints at the large variation and heterogeneity among players in how they handle hot streaks and how hot they actually get.

In addition, defenses respond to hot hands. Attenuation isn’t the only issue here; it’s plausible that a player with a hot hand will see a decrease in the probability of their next shot going in. The relationship between the latent variable and the observed outcome variable is very weak.

There’s a tool called SportVU that has been used to analyze players’ xyz coordinates. The researchers, Bocskocsky, Ezekowitz, and Stein, found that players were more likely to attempt difficult shots if they exceeded their expected shooting average over recent shots.

At some point – and we seem to be well past the fallacy fallacy so maybe this is it – perhaps we should just declare the hot hand to be a good example of an intractable problem.

I was once asked to analyze and summarize a series of very well characterized past events into a reasonable prediction of how the same sort of event was likely to play out in the future. What I found was that despite all the really impressive data available to me, all it told me was that each of the previous events was unique in multiple significant ways, to the point that I couldn’t really summarize anything.

The problem with the hot hand is that even to model it, you have to make major simplifications such as “considering only shots of some particular difficulty level.” In a game, each shot has a unique difficulty level, even if you are standing on the same spot.

Here is another way to look at it. Suppose a group of scientists wanted to predict wave height on a beach on this day a year from now. They could write a string of impressive papers, with loads of graphs and sober analyses of past years. But they might do just as well flipping a coin for higher or lower wave height than this year. The only difference between the “hot hand” literature and this wave height literature is that we humans are instantly able to recognize that the wave height team never had a chance.

+1. Another example is the fact that many time series forecasts are no better than the naive forecast (the last observation is the best predictor of the next observation). Of course, this doesn’t mean we shouldn’t try to improve the modeling. But sometimes the noise in the data and number of unobserved variables means that it is unrealistic to expect to get much of an answer from the analysis. I’m starting to think the hot hand is one such situation, though I am learning a lot from the way that people are trying to model it.

Matt:

I disagree with your claim that the hot hand is an intractable problem. I agree that it’s intractable if all you’re given is a sequence of 0’s and 1’s. But there’s a lot of other information that’s available, or could be available, including information on the difficulty of the shot (distance from basket, distance from nearest defender, height of defender) and on the quality of the shot (if it went in, how close it was to missing; if it was a miss, how close it was to going in). As with many statistical problems, the challenge is how to gather this relevant information and incorporate it into the analysis.

I’m in between Andrew and Matt here. I think the hot hand _may_ be an intractable problem.

As Andrew noted in the post, even the player may not know when he is _actually_ hot. Maybe he just feels hot sometimes — hey, I’ve made 4 consecutive shots, give me the rock, baby! — but is only _actually_ hot in some of these cases. You can control for whatever you want, you’re not going to somehow do away with genuine stochastic variability. Most pro basketball players even miss the occasional free throw!

You can try to make a model control for distance to basket, distance to defender, height of defender, speed of defender, jumping ability of defender, number of minutes the shooter has played in the game (and at what pace of play), and various parameters that relate to mental pressure (is it an important game, how big is the point differential and how many minutes left to play, etc.). You can do this and predict the success probability for each shot, conditional on the model, and then compare that to the actual successes, and conclude that there are time periods during which the player outperformed or underperformed the model, and you can define these as ‘hot’ and ‘cold’ periods. Maybe the sample sizes are large enough compared to the stochastic variability that this can work. But maybe not. Well, of course you can always do it — you can create a model, and you can define a ‘hot’ period as any sequence in which the number of successes is greater than predicted by the model — but how convincing will this be? There can always be additional variables you could have included (how noisy was the crowd? Did the player tweak his ankle a few minutes ago?…). If you include too much, you’re fitting noise; don’t include enough and you’re attributing successes to the ‘hot hand’ that are really due to external factors.
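A minimal sketch of this residual-based definition of “hot”: simulate a player with a genuine hot spell, score each shot against an (assumed known) baseline difficulty model, and flag windows where makes exceed the model’s prediction by more than two standard deviations. Every number here is invented for illustration.

```python
import math
import random

random.seed(3)

# Baseline logistic model of shot difficulty; the simulated player gets a
# genuine +0.4 logit boost for shots 100-149 (a real hot spell).
def p_make(difficulty, hot_bonus=0.0):
    return 1 / (1 + math.exp(-(0.5 - 1.2 * difficulty + hot_bonus)))

shots, preds = [], []
for i in range(300):
    d = random.random()                      # difficulty covariate
    bonus = 0.4 if 100 <= i < 150 else 0.0
    shots.append(1 if random.random() < p_make(d, bonus) else 0)
    preds.append(p_make(d))                  # baseline ignores the hot spell

# Rolling windows: z-score of (observed makes - expected makes).
window = 25
flags = []
for i in range(len(shots) - window + 1):
    obs = sum(shots[i:i + window])
    exp = sum(preds[i:i + window])
    sd = math.sqrt(sum(p * (1 - p) for p in preds[i:i + window]))
    flags.append((i, (obs - exp) / sd))

hot_windows = [i for i, z in flags if z > 2]
print("window starts flagged as hot:", hot_windows)
```

Even though the hot spell is real and sizable, the expected z-score of a window fully inside it is only about 1 here (a +0.4 logit boost raises Pr(make) by roughly 0.1, i.e., about 2.5 extra makes per 25 shots against a sampling sd of about 2.5), so the 2-sd flag fires unreliably: a concrete version of the detectability problem described above.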

I think it’s possible for the hot hand to be real, and fairly big, but impossible to reliably quantify (or perhaps even detect) in any single player.

My weakly-held belief is that the ‘hot hand’ is a fairly small effect. I think there are psychological factors that lead to people tending to overestimate it, so we do. (Recall that the original paper demonstrated that people see a ‘hot hand’ effect in coin flips, where none exists). In the spirit of “no effect is zero” I do think the hot hand exists, but I don’t think it’s large. Given the realities of sports — a game only lasts so long, ditto for a season, and a player’s ability changes with time in ways that we would want to distinguish from the ‘hot hand’ — this may be a real effect that resists quantification.

> My weakly-held belief is that the ‘hot hand’ is a fairly small effect. I think there are psychological factors that lead to people tending to overestimate it, so we do

Yah.

I can’t follow all the statistical analyses and when I last read about this the “hot hand” was considered a fallacy. I went along with the notion that you could just rather simply evaluate whether a player’s success or failure with a previous shot was predictive of their likelihood (relative to their typical shooting %) of their next shot going in – and people found it wasn’t predictive.

I assumed it roughly followed with the arguments about “clutch hitters” or whether or not baseball teams on hot streaks would be more likely to win their next game than you could predict based on their winning %, the quality of their opponent, whether they were at home or on the road, who was pitching or injured, etc.

I don’t know what’s changed to debunk the “fallacy” and I probably couldn’t understand it anyway, but I am curious as to whether people have investigated whether the “hot hand” might be a real effect for some players even if it isn’t a generalizable phenomenon, or whether it interacts with whether the player was at home or on the road, or with how long the hot streak had been going on?

At any rate, the bottom line for me is that I would also guess that the effect is small, and quite likely considerably smaller than players and fans generally think it is…

My “prior” is that this putative effect fits pretty nicely into the propensity towards confirmation bias.

> and a player’s ability changes with time in ways that we would want to distinguish from the ‘hot hand’

I think this is a really key issue here. Until we have a definition of what constitutes a hot-hand and what constitutes some other kind of time-varying skill, there’s no point in discussing the hot hand any further.

Skill varies with time. This is incontrovertibly true, I mean, while he writes some pretty great Sherlock Holmes fan fiction, I doubt Kareem Abdul-Jabbar at age 73 is anything like as good as he was at age 33.

So, what the hell is the hot hand anyway?

Daniel:

“Until we have a definition of . . . there’s no point in discussing . . .”: I disagree with this attitude! We go back and forth between data, discussion, analysis, and definition. It can be hard to start with a definition. Sometimes the definition becomes more clear after discussion. All sorts of problems in social science arise from premature definitions. For example, “the hot hand is serial correlation” or “risk aversion is a declining utility for money.”

One place researchers could start would be to define the outer limits of how strong the hot hand could be. Take Andrew’s example of a 50% shooter who is actually 40% half the time and 60% the other half. It would be very easy to prove that this model is not remotely true for 50% NBA shooters. You could find examples where such players made 5 consecutive shots, which should consist overwhelmingly of cases where the player was “hot.” But you would find that these players shoot much closer to 50% than 60% in these cases, thus ruling out Andrew’s hypothetical. If someone wants to take the time to do this work, I’m confident they will find that the maximum possible difference between “hot” and “cold” states is quite modest, and/or that any hot streaks that do exist are rare.

Best of all, we could then all go back to the correct understanding that the “hot hand” is mostly a cognitive error.
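What this streak test would predict under the post’s toy model can be computed directly with Bayes’ rule, assuming for simplicity that the hot/cold state persists through the whole streak (my assumption; the post only specifies persistence from one shot to the next):

```python
# Two-state model from the post: Pr(make) = 0.6 when hot, 0.4 when cold,
# each state occupied half the time. After 5 straight makes, how likely is
# the player to be hot, and what is the expected success rate on the next shot?
p_hot, p_cold = 0.6, 0.4
streak = 5

post_hot = 0.5 * p_hot**streak / (0.5 * p_hot**streak + 0.5 * p_cold**streak)
next_p = post_hot * p_hot + (1 - post_hot) * p_cold
print(f"Pr(hot | 5 makes) = {post_hot:.3f}, Pr(make next shot) = {next_p:.3f}")
```

So the toy model predicts roughly 88% posterior probability of being hot and about 57.7% shooting after five straight makes. If real 50% shooters shoot close to 50% after such streaks, as suggested above, that would be evidence against a hot hand this large.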

Correction: “Andrew’s example of a 50% shooter who is actually 40% half the time and 60% the other half.”

Guy

For your 5-shots-in-a-row test there wouldn’t be enough data to answer the question for any given individual player, even if you were willing to ignore the quality of the defence, location, etc. Also, streak tests are typically used to get at the transient-state hot hand. If the streak of 5 is a layup in the first quarter, a 3 and a short-range shot in the second quarter, and another layup and 3 in the 4th, you have an issue. Can’t really address transient hot hands with live-ball data. With current measurements (and controls), live-ball data is consistent with almost any belief you want to have about the average effect. There are plenty of exceedingly unlikely individual incidents, and not just in the 3 point contest…

ps. the cognitive error view is not inconsistent with the idea that *some* players can have a sizeable hot hand, and the evidence everywhere you look when you can measure and control indicates that a hot hand exists. The only question is how far you are willing to generalise.

Andrew, sure, I am ok with a period of exploration. Now we’ve had that for a couple decades… it’s time to decide what we’re talking about.

We had a period of exploration of astrophysics for, oh, I dunno, somewhere between a few tens of thousands and a few million years, but we finally got it squared away.

The fact that there is no existing perfect explanation doesn’t mean none can exist. It’s plausible until there’s a clear refutation, which there isn’t.

This is more like deciding whether we are talking about stars or planets and what is the difference between them. No good going around with half the people saying stars don’t glow because they mean “rocky bodies that orbit glowing balls of gas” when they say “stars”

I thought to myself, “why am I reading this?” since I don’t care at all about hot hands. But learning that Kareem Abdul-Jabbar writes Sherlock Holmes novels made my day.

I actually think the novels Abdul-Jabbar has written at 73 are better than the ones he wrote at 33.

You should read them. The first one is a little rough, but they get better.

Phil said: “I think there are psychological factors that lead to people tending to overestimate it”.

My guess is that there is a fundamental psychological explanation *for* it, something like this: it’s a state of mind in which the interference of the conscious mind is reduced, and the knowledge that has been hard-wired into the subconscious mind by years and years of practice and synapse building is able to execute closer to its peak ability.

That explains how “confidence” comes into it. “Confidence” is just the quieting of the cacophony of doubts that rage through the conscious mind. Rather than “confidence”, it’s more just a state of calm, in which the patterns that have been created by years of training are allowed to be expressed unchecked by consciousness. The state of mind continues until unexpected events cause the conscious mind to generate doubts and interference, ending the “confidence” and the hot hand.

In sports the hot hand is measurable by shooting or passing or hitting or whatever, but to the players it’s about total play – especially in basketball, where the shot itself is just a culmination of many decisions and movements by multiple players.

The state of mind that generates the “hot hand” occurs outside of sports all the time, but it’s called different things, or maybe it doesn’t have a specific name. I’ve felt it in sports – playing hoops even as bad as I am – but mostly playing music. Most of the time I struggle to find the expression I want – but sometimes it just explodes out of me and every note is perfect, both emotionally and technically.

“it’s a state of mind in which the interference of the conscious mind is reduced, and the knowledge that has been hard-wired into the subconscious mind by years and years of practice and synapse building is able to execute closer to its peak ability.”

Perfect! That is exactly right IMO.

> (See here for a recent example.)

The recent example link doesn’t work (appears to be in the future).

Andrew wrote: “The difference, I think, is that in most problems of policy analysis and causal inference, the parameter to be estimated is clearly defined, or can be clearly defined. In the hot hand, we’re trying to estimate something latent.”

Your argument may hold for econometrics, but its premises are severely undermined in psychology. Social psych parameters, for example, also tend to be latent and elusive, much more so than the hot hand. The question “does making baskets predict making baskets” is a lot more manifest than “does power posing predict greater confidence?” Shouldn’t your argument for attenuation also hold for power posing, himmicanes, etc?

I think a much stronger explanation for overestimation in the social sciences involves how easy it is to redefine the parameter: researcher degrees of freedom. This is an argument you frequently raise, and it’s especially powerful for distinguishing the hot hand from other research, because there are so few researcher df’s for the hot hand. In contrast to power posing, the hot hand is fairly agnostic about theory, so its theory can’t be rewritten to support the observed effects; has no experimental design and no pilots since it’s observational; cannot be directly influenced by the researcher, who’s usually working with archival data; and is difficult to selectively analyze and report because there’s an overwhelming amount of data that’s publicly available.

Even if this post’s explanation is accurate, experimenter df’s are just too powerful to leave much unexplained.

Michael, my thoughts exactly.

Your comment illustrates Andrew’s combination of two features into one: the constructs of interest are latent vs. observed, and “the” measure is flexible vs. not. I agree with you that the latter is probably the bigger problem. It also illustrates Andrew’s very good points on the importance of measurement: if your best measures are bad, your effects will either be observed as small or, filtered through the publication process, noisy overestimates. How do we know which one? If there aren’t any acceptable alternative measures, you get underestimated (vs. true) effects. If the field is more accepting of other (but still bad) indicators, you’re more likely to get overestimates. Though this is not to say that bad measurement is a necessary or sufficient condition for bad research.

I’m not much of a statistician, but what we wrote on the hot hand back in the dark ages is contained at this link. My take is that the evidence against the hot hand is not very strong, but our evidence in favor isn’t that strong either. The paper to look at is called bowl4.

https://claremontmckenna.box.com/s/fmt39kixh8nd08ofag9lv5v41m1bqgh2

Forgive me, but I thought the hot hand paradox was related to subjective versus objective, and the inefficiency of transitions. That is, you are one of 5 players, and at various points, one of you is shooting really well. There is internal inefficiency: you think you’re hot but you’re not, or you think you’re cold but you’re not, or you think you can get going if you shoot, or you think you’ll get going if you don’t shoot for a while. And that internal inefficiency develops as you think: I’m hot from over here, so I could be hot around this area, so I need to make the play here. Like when a player is repeatedly open for a shot from the corner, where the defense can more easily be a step late because that is where the game meets the edge of the court. And externally, as noted, the other players have to allocate shots and plays for you to generate the most points, which may mean feeding you when you’re hot, but also may mean feeding someone else for an easy shot because the defense is keying on you more now.

So, in this version, you can have a hot hand, which everyone experiences, but there’s built in inefficiency because so many functions run through the space, some subjective or internal to you, others the subjective of others as they manifest as externalities toward you. As in, the goal is to maximize your team’s points, so if you assume competitive games where the outcome is iffy enough to make efficiency of offense and defense count, then you can achieve a certain output, which becomes the daily line because there is a relation to history which is at least measurable and projectable.

In this version, the hot hand is both real and vanishing, without requiring that defenses adjust, though that can be a factor. It tends to vanish if we treat the states as binary in simplification, and then use the functions that run over the space between as confounding, or as creating what I like to label a ‘complex real’, meaning a representation that contains multiple layers which resolve to an instance, meaning a traditional ‘real’ on a real axis, but which retain complications. Like a birth is an event which is felt for years after (as it was anticipated before), so the instance which is real is the actual giving birth of an actual baby, but that baby’s physical existence is a tiny part of the complex real that it contains. That complexity then evolves over time, with chains of instances representing valuations of these functions.

I have a small, I hope, request. I post but I don’t come back to read what I’ve written unless I fear I’ve made a mistake or somehow managed to be so insulting in my ignorance that someone objects. But this is different. I don’t understand the issues with hot hands. First, that everyone experiences this in life is proof they exist. The issue is finding a model that explains them, which I presume motivates your post. Second, I think the issue is the way the problem is approached, which means the model which fits is hard to see.

Take something similarly deep, like Collatz: if you ask it from the perspective of ‘is there a number which …’, you can chase it, à la the wondrous Terry Tao, into the melodies that play and thus out into very large numbers, but you need to flip it over and ask: is there a structure which contradicts the question? That is the same as: is there a model which fits. The idea in that goes all the way back to antiquity because the proof of irrationality generalizes an abstraction to cover all instances. That hides what is to me at least an important issue, that this means a model exists which has these characteristics which absolutely apply in the specified circumstances (like flat projection to basic shapes). And it seems then clear that same model underlies every other application of the abstract to real instances, which means complex numbers, the unit circle, and eventually even various meanings of primes.

To me, the question about the hot hand is not whether it exists but that something which exists needs to be accepted as existing, so how does that fit? I’m not a big basketball fan, but I love soccer. In soccer, there are maybe a handful of possible scoring plays a game. Like in US football, except there because you require tackling and the goal is undefended, any play has a higher chance to score. A hot hand is no different than the longer expression of Arjen Robben’s left foot, meaning that as a left-footed player on the right wing, his essential opportunity was to cross to the left against defenders who had been trained to cover players who drive to the right to gain the edge (and who thus are physically skilled to be left backs versus right backs). As long as Robben had the speed to take the ball (and Bayern had the other players to get him the ball), then he had a useful shot. That is a key attribute of a hot hand: as you move from flipping fair coins to unfair coins, which you model as a segment of 1 to 0 and 0 to 1, which means you can treat it as extending from -1 to 1 to represent this over a complex field.

I’m saying that just like the proof of irrationality, the relative distance from random fair flips scales a model. I always feel at this point like I’m saying idiotic things, but one way to see this become a scalable model is to recognize that all these functions operate in what I think of as the ++ quadrant, using a version of standard notation, so the -1 actually exists in a space in which it is 0. You get the same result if you lay the segment on itself or if you treat it as having two endpoints. A simple rendition is a complete subgraph of 4 vertices where the intersection is a 5th point which contains the subgraph. The reason it’s ++ is at the end.

Another way of looking at this is to say that we have many ways of counting, that scales are examples of modularity, that existence is clearly modular because it scales, etc., and derive from this simple arrangement layers in which this occurs so the various systems of counting exist. I approach this – and again, excuse my idiocy – by generating binary labels and connecting those to make what I call Start, Between, and End, where we label S and E as 1 and B as 0. This says we have two things, each existing against a 0 and connected in some fashion to another thing, which might be itself in the future or past, however near or far. So, I tile using these complete subgraphs of a square. The vertices are 1’s and the holes are 0 and the intersection I call the bip because it’s the disappearing point where the relations rise or fall in layerings, and to me it goes bip because an instance can be very short indeed.

I’ve probably worn out whatever patience you have, but take a page of graph paper, and pick a spot and draw the x and y as lines of squares, and the z as the diagonal lines of squares. I like to label the lower left as S, and thus the side corners are to me the B endpoints because they form a 0-1, 1-0 segment across the secondary hypotenuse which I call the Bhyp. If you imagine the bip moving around, then you can see the complex field generating an instance. Those instances fit to a larger instance of complex real, like a baby has a foot but it’s still a baby if the foot is missing or turned in like mine.

One point should be clear: you are always counting across instances, which is the same as saying that the question about the hot hand isn’t whether it exists but how. At the fundamental level, it exists because there is a layering of structure which is entirely made up of multiple crossings of segments which simplify to 0 and 1, 1 to 0, and which say that any existence has a complex process connecting it to even its own continuing existence, as well as to its past existence. We’re always counting at some level. And that is embedded because the basic building block is a scalable square which tiles.

Since I doubt this is clear – or of interest at all – by counting the bips, you can count squares. So if you go out along the z, the count of squares is the same in x and y. I call those x,y,zK to indicate they’re iterative, like k of n, and because if you face K’s at each other, that draws the graph not of the lowest level square, but of the square of the count of the bips. This creates a tilted square which runs over the ‘bottom’ layer. One consequence is this defines the relation of 4 into 1 and 1 into 4 because any square at this bip layer is an instance of 4 squares of the base layer.

Once you realize that all operations take place in the ++ quadrant, a lot happens. At the basic counting level, this generates base 2 patterns, meaning halves of x,yK idealized to the diagonals of zK, which I call either 1 and 2 or SBE and Bhyp. SBE now translates to 3 squares: that is the expansion of 1 into the 0-1 bidirectional segment, the same for 0, and the same for 1. Note that since this all occurs in what I call the ‘positive image’ of ++, it’s trivial to generate the Leibniz expansion, and oscillation around the expansion. And, if you hold one 1 and apply the tiling across as 1 thread or perspective, then the value of that is e.

And it generates base 10 in a way I find absolutely marvelous: if you count the 3square that is SBE, then you get what I call SBE3, which is 9 squares with a center square. Unit circle again, but unit circle that moves along the zK, and in either direction because it’s always in the positive image. What is the bip of SBE3? 10. When you call a square 1, which it is as the complex layering within or on top of the label 1 at the bip, then you have made boxes that contain values, and those boxes have a special meaning at SBE3 because the box that contains it is the 10th box. So when you count 10, you are counting an 11th box of some kind at some level, including as just a line of squares. If you take the graph paper, draw a box around 9 squares, then draw a box around that. This shows what I label as (SBE3+1), which is more accurately (1+(SBE)+1) because that square can attach at either end, which generalizes to each iteration of either ends. So, for example, the count of 4 squares is 16 in either ‘hand’ or direction, but is 25 in both. As the pattern expands, you see SBE3 within a grid scaling and just 4 of those is 100. This isn’t, I think, silly manipulation: the point is that 4 connects 2 of these things or Things as I call them to mean multilayered and thus multidimensional representations which can be rendered through these squares, which I call grid not graph squares because they function as grids and lattices on which you plot or move like on a chess board where the potential for each move at each moment is contained within the square, whether it has a piece on it or not.

I again assume I’m being moronic, but to at least try to be clear, this is the same as saying the 0 is SBE3 and that 0 attaches to S or E. This is a consequence of expanding 1-0-1 to squares; each square holds that expansion. This approach of modeling from the ground up has the advantage, I think, of explaining the role of 10, which is both to count 10 objects and to say that these are an object which can be treated as a unit 1, not just because any collection or counting can be treated as a unit, but because this version of 10 is the complexity that attaches to a count. This is a binary statement: there is 1 and there is whatever exists in the complex field which is not some other 1. So 10 is not just another base but is the base that expands the definition of 1 to 1 in a complex field counted as squares, which are the expansion of 1 over a complex field to 1. I find that convincing, but my impression is the concept that the number system is a philosophical construct or a lucky guess is so deeply embedded that viewing 10 this way must be nuts.

I use these labels: the complete subgraph is the immediate context, where immediate is scalable and relationship determined and thus is countable using the simplest form of representation, and context because it embodies a variety of states that summarize to the bip. I call it IC, and 16 is LC for larger context, because that makes 2 Things in a context. Immediate Context embodies 2 Things but, in the sense of the above, it can be either/or; if it’s both then it’s 16. This simplifies to the combined binary statements of 1 and 0, 0 and 1. You can see the matrices. So, you have layers of squares that take base 2 patterns, meaning they divide evenly over zK, and those take base 10 patterns, and these relate completely in this grid squares or gs model.

As an aside, assuming anyone is reading at this point, if you imagine zK as behind the x,yK grid squares, then you can see the imaginary axis meld with the yK. But first, if you take the squares and look only at the ++ quadrant, then you draw out along the x,y,zK counting with integers, except now you have built-in the inherent ambiguities of counting, that being what you label as Start or origin. (SBE3+1) adds up to 10 but if I write SBE3, that implies the preceding 1+. This locates Things in places within spaces, and that connects to how the squares read or count across, so a Thing becomes locatable in all the various ways we locate things.

Then you can count out along xK and up to zK along yK. Or the other way, which embodies that same handedness. This generates compound and prime numbers as expansions of the forms already created. That is they induce, and they induce strongly. You can see this on graph paper: a single square exists as the ++ of a quadrant, and that square inverts as the – – or lower left quadrant for the next ++ in the count. Each step, you count across the squares in x,yK, which is obviously all the odds, you can see 2 and 3 emerge as primes, with the halving reflecting over and around the quadrants, so it generates fractions – the rationals show up quickly as comparisons of countings around the quadrants. This matches entirely to the gs model at a conceptual level: the halves bip, meaning they divide or hinge or invert over an axis which is reduced to disappearing. Halves are the basic parity check. That is why average and median exist; they are the counts which hold the disappearance of a form of halving.

SBE is 3, so every multiple of 3 contains the existence statements of expanded 1-0-1 along with the process statements inherent in the expansion (and contraction). We covered 5: it’s both a literal count of 5 and it’s (1+(SBE)+1), meaning it embodies the expanded 1-0-1 attached at both ends. I hope you can see then how 4 relates: you take the 2 Things with a hinge somewhere in the count, but idealizing to 1 and 3 or 2 and 2 or 0 and 4, and you say those 2T relate at some place in that chain to another square so that relation has value at that counting level. That again is the bip where the bip is rendered as a square in zK, specifically what I call L5 is at zK3, which means it traces the edge of SBE3 in one direction of zK, which means it moves through complexity, and that idealizes to the x,y,zK axes.

Because gs embodies modular counting, the rest is fairly simple: there is now a parity check around the quadrants which verifies composition. Again, you can go around because this is all in the positive image space, which we can define using a number of approaches, the most obvious probably being a form of Hilbert space because it contains higher and lower dimensions. I prefer seeing these squares as representing the Leibniz series. I don’t know if that needs to be spelled out: count one way, count the other, the oscillations flow around (which eventually requires a clockwise or left-handedness of rotation at a very low level), and this even generates the Start as the minus 1 of 3. The reason I love this is that alternation now generates organically, as the idealization of the processes of counting squares. You can see it in each quadrant: the alternation of sides when either can or could exist in balance or not. This is one of those areas where I have trouble: it’s inherent in many formulations – I go immediately to Jonny vN’s versions – but I don’t see alternation built from the ground up, perhaps because it is the literal embodiment of a complex real taking form at each step over each step and those over k of n steps.

Since the action is all in ++, the – – is clearly invertible into ++. Now look at zK: at any prime, that zK square represents the count of squares that are modular to 1 and themselves only, as that develops over a complex field. The next step is mechanical: what is being counted is the length of complexity over a count of 1, which stacks the squares as yK, as the yK gs axis. Each xK square lines up as 1 through its bip because that is what abstracts to the level of continuation of the process of counting over the squares to the modular form of the prime count as 1s and itself. The bip is at real ½, when yK complexity is the horribly named imaginary. This is the approach from the other side: instead of looking at the zeta function, which extends the alternating function, the gs approach says here is a mechanical function that generates values, and these values represent these various ideas as they render, from concept to calculation. My approach, feeble as it is, is to work from the ground up. That is why I mentioned Collatz: the gs approach is to say that each zK count has embedded within it a moving chain of SBE, and that this moving chain always relates to halving and doubling. In one version, it counts 3 quadrants and +1 for the last. It clearly describes the base10 relationship of SBE and SBE3, and it repeats the same 1 square to 3 square growth of the very basic gs. That basic idea is ubiquitous, so I named it f1-3, where the dash indicates the 3 may come first, that the concept has inversions and isn’t one way unless restricted. Since this is true everywhere, it is true everywhere. That says the question from one side can be pushed way the heck out, using creativity that is mind-blowing, but that ‘way the heck out’ means there must be a reductive form hidden within which can only be the form that extends over the entirety of the space.
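To be concrete, the Collatz rule I mean is the standard one: halve when even, take 3n+1 when odd, and iterate until you reach 1. A minimal sketch in Python (the function name is just mine):

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the usual Collatz map (halve if even,
    3n+1 if odd) until n first reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps
```

Starting from 27, for example, the chain wanders for a long while before collapsing to 1, which is the halving/doubling tension I mean.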

This goes to what I see as a strength of gs: the space is always constrained. Because it is in ++, there is always an edge because there is always oscillation into and out of – –, and that imposes a variety of relative halts or other ways to fix a spot or count or idea. A nice thing is this consistently unravels the vagaries of normalizations and renormalizations: when every square is a grid square, the entirety becomes an expandable, compressible lattice in which layers relate. I mentioned e: you can see the division of the complexity accompanying a held 1 into infinite pieces that then combine as layers to that depth, which is exactly how e works.

And given this structure, you can see the development of Pi (sorry for not inserting the symbol) out of the Leibniz series, by which I mean it rather bluntly becomes the way round the quadrants as those smooth so the distance between the vertices of squares gets smaller. And that maps the complex journey to the SBE3+ version of 10, so complexity develops around the quadrants in both the ‘circle’ conception and when Pi is treated as a relation to e and so on. Since I’m likely unclear, and talking about this probably locates this deep in crackpot land, I mean that it’s obvious from looking at how the series counts that it connects a lot of complexity from and between stable values, and those values take iterations to settle in place, so there is an SBE3+ connection across the squares where S is a stable value, B is between, and E is a stable value. This draws out as a giant sheet of paper which reduces to a unit circle because that form fits. I think this appropriately treats circles and thus spheres, etc. as gs model constructs.

So, to end my crazy contribution of no worth, I want to note that a fundamental meaning of gs is that perspective has actual meaning or value. A thread is a set, etc. If you look at just one complete subgraph, there are a host of functions; from S around in either direction, with across on a diagonal available at each point. If E is S, you get more. If one of the B endpoints is S, you get more. If you include paths into other tiles, you rapidly generate graphs beyond comprehension. You must know the Erdos story about aliens demanding a Ramsey number for peace: if it’s 5, commit all resources to finding it, but if it’s 6, you can’t, so you’re going to war with them. The point is the abstraction has levels, and it may only be comprehensible at a relatively close or low level, but those low levels where you can comprehend extend beyond your comprehension. I mention comprehension because you can generate axioms out of gs. From the above, you can see choice as coming from the model side, meaning from the other perspective of the question, and further that the CH is not asking the entire question: it is correct but gs generates relative existences and thus isolations or condensation of points out of a field, so CH is a general statement which categorizes forms but not an operative statement given relative limits imposed by gs. Those limits occur because, again, all the functions run in the ++ positive image, which means they occur within. That works out to inscribed and circumscribed, and thus how events which occur in one space – which I tend to call a Playspace – can occur in multiple iterations. Like we take for granted that trees make a forest, and that there are trees, and that trees are woody plants, etc., when the idea is that multiple trees occur, that multiple stuff occurs like the kind and nature of a tree or forest, and that reflects a structure which allows replication and iteration.
One neat thing is to see the line of bips as representing a counting number and the squares as representing the extent of that number as it approaches or recedes from that number. So one result is a point that counts and a ‘bubble’ that can’t be counted because the continuum is Between. The question beyond that is: when you have all these counts and continuums (or gateways in and out of countable and uncountable), then how do they shape? If there is any interest, I can explain how that works. It was difficult.

Anyway, thank you for the space for me to expose myself as a complete fool. I’ve been wanting to do this for a long time. I first thought of this approach when I was 2, maybe 3 years old, and playing with some other little kids in our vast universe of 3 or 4 houses on a dead end street in Cincinnati. I became the object of the game, which in toddler fashion, consisted of hide the object, run around, find the object. It was hot and I ended up standing inside our garage with the door down. I could hear the other kids but I was inside. If I am the object of the game, then why is this no fun? If I am the object of the game, then how am I in here while the actual game is out there? How am I an object in a game? Where is the value if the value is in the game and not the object of the game? I know I was 2, maybe 3 because I crippled my hand in the winter and then we moved to Detroit. So my defense is that I’ve been nuts my entire life. Born to abstract would be on a t-shirt for me.

Thanks again for either giving me space or for deleting this as ridiculous. The reason it’s ++ positive image is that it occurs within an x,y,zK grid no matter what, so any direction chosen is ++. The gs model is direction agnostic, which is necessary because then it can polarize and can generate tension across a count so this over here carries over to there without imposing direction on any piece or part. When you impart tension across attributes – which, again, count at levels ‘away and above’ the base grid squares – then you get effects that occur within limits, including classical information limits. That is a dense topic.

I doubt I’ll be posting much here again, so I want to thank you for your blog. It’s been fascinating to see you work through ideas. I sometimes post just so I can feel connected to the ideas, not because I think I have something to say. This is pretty much the only blog I look at regularly, mostly because you deal with organization of uncertainty and your own existence in that, as that connects to ‘beliefs’.

So what then does it mean for the hot hand hypothesis to be true? Surely it can’t be the trivial claim that we can better model a player with p changing over time during a game since, of course, we will find a better fit. Is it merely that fit should be better than what one might expect by chance? I don’t think so since the mere fact that players get tired over a game will make that true.

Peter:

I should let Miller and Sanjurjo answer this one, but let me take a shot: The hot hand hypothesis is true if a player has large variations in shooting ability, and if he has some sense of when this is happening. If you can get hot but have no idea it’s happening, then that’s not really the “hot hand” as we understand it, and if getting hot implies only a 2% increase in success probability (for a shot of equal difficulty) then that’s not much of a hot hand either.

The other question you bring up is how to distinguish the hot hand from the absence of a cold hand. I agree that “hot hand” implies not just variation in ability but also the idea that sometimes you’re doing better than your usual level. Indeed, I could imagine that some players don’t get “hot” because they’re always focused. They can get “cold” due to being tired, injured, or simply getting outplayed by the other team; but “hot” is not just the absence of “cold.”

I think these sorts of discussions can be helpful in exploring questions of definitions and measurement.
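The attenuation point from the original post can be checked directly: simulate a shooter who is hot half the time (Pr(success) = 0.6) and cold half the time (Pr(success) = 0.4), in long spells, and then estimate the hot hand the usual serial-correlation way. A minimal sketch in Python (spell length, sample sizes, and names are arbitrary choices of mine):

```python
import random

def hot_cold_gap(n_spells=10_000, spell_len=100,
                 p_hot=0.6, p_cold=0.4, seed=1):
    """Simulate hot/cold spells, then estimate the 'hot hand' as
    P(hit | previous hit) - P(hit | previous miss)."""
    rng = random.Random(seed)
    shots = []
    for _ in range(n_spells):
        # Each spell is hot or cold with equal probability.
        p = p_hot if rng.random() < 0.5 else p_cold
        shots.extend(rng.random() < p for _ in range(spell_len))
    after_hit = [b for a, b in zip(shots, shots[1:]) if a]
    after_miss = [b for a, b in zip(shots, shots[1:]) if not a]
    return (sum(after_hit) / len(after_hit)
            - sum(after_miss) / len(after_miss))
```

With these settings the estimated difference lands near 0.04 even though the underlying hot-minus-cold gap is 0.20: the attenuation in action.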

I’m a little uncertain how a researcher would separate the hot hand of an individual from some sort of team effect. A certain pattern of play over a short period could exploit a weakness in a defense, meaning that someone makes shots. The person might not have any sort of hot hand but the team might make it look that way. With team sports the performance of the individual surely has to be put in the context of the team – the individual does not get the opportunity to have moments of brilliance without the team making that possible.

Free throw data may provide some insight.

From https://www.researchgate.net/publication/46554967_Revisiting_the_Hot_Hand_Theory_with_Free_Throw_Data_in_a_Multivariate_Framework :

“Given the heterogeneous nature of field goals and several potential sources that could cause a positive or negative correlation between consecutive shots (such as having a weak defender), free throws may provide for a more controlled setting to test for the hot hand.”

I think free throws aren’t a particularly parallel test scenario.

After the first free throw, the shooter kind of gets a feel, a rhythm, for what is pretty much the exact same 2nd shot.

Shots in real time during the course of a game have much more variability, one shot to the next, imo. Seems to me that makes it a very different kind of activity.

Do you think it is possible for a player to be “in the zone”? I’ve heard this from musicians, athletes and a number of other professionals where their performance improves for some bit of time. Using free throws from predetermined positions could possibly remove the variability of a real time game and still test for a hot hand.

Some of the discussion here is missing a key point:

The “hot hand” is a physical/mental phenomenon that a player can feel. From a player’s perspective, there’s no question about whether or not it exists. Players know it does.

So the question for statisticians is how/if it can be detected, not whether it exists or not. I guess you can talk about whether it exists “statistically”; but in the game it’s real.

> The “hot hand” is a physical/mental phenomenon that a player can feel. From a player’s perspective, there’s no question about whether or not it exists. Players know it does. So the question for statisticians is how/if it can be detected, not whether it exists or not.

You could say the same about precognition or telekinesis.

Except that there’s ample room for the “hot hand” to hide in background noise, whereas at first glance it’s pretty easy to measure the movement of an object, or to ask someone to write down their precognition as soon as they have it, rather than claim they experienced it after the fact.

Also, I offered a plausible explanation for the hot hand above, whereas it’s much harder to generate a plausible explanation for the phenomena you suggested.

I suspect the hot hand is real but the discussion of causality for it is misleading. Basketball, for example, is a tactical sport. A defense might block drives from the left hand of a wing and in the process consistently open up a shooter on the right wing, who then starts making a lot of shots. This would appear as a hot hand, but really it’s a mismatch being exploited until the defense changes its strategy. The offensive player may even be unaware that’s what he’s doing, unless he’s particularly cerebral. He may just react to advantages instinctively.

We know this happens. It’s constantly discussed by sports strategists. So we ought to account for it before considering an endogenous player-specific variation in luck, for which we have no causal mechanism.