Changes since the 1970s (ESP edition)

In 1979, the computer scientist Douglas Hofstadter published the book Gödel, Escher, Bach. I’ve always thought this book to be overrated, but maybe that judgment on my part wasn’t fair. Gödel, Escher, Bach is a long book, and it has some mistakes, but it also has lots of solid, thoughtful passages that stand up impressively well, 40 years later. Not many books of speculative science and engineering would look so reasonable after so many years.

At one point in the book, Hofstadter alludes to Alan Turing’s notorious belief in extra-sensory perception:

My [Hofstadter’s] own point of view—contrary to Turing’s—is that ESP does not exist. Turing was reluctant to accept the idea that ESP is real, but did so nonetheless, being compelled by his outstanding scientific integrity to accept the consequences of what he viewed as powerful statistical evidence in favor of ESP. I disagree, though I consider it an exceedingly complex and fascinating question.

The Turing thing we’ve already discussed: the statistical evidence that he thought existed, didn’t. The 1940s were a simpler era, and people trusted what seemed to be legitimate scientific reports. Can’t hold it against him that he wasn’t sufficiently skeptical. This should just make us think harder about which accepted ideas we hold without reflection nowadays. If Turing could make such misjudgments regarding statistical evidence, surely we are doing so too, all the time.

What I want to focus on is the last bit of the above quote, Hofstadter’s statement that the question of ESP is “exceedingly complex and fascinating.”

That’s a funny thing to read, because I don’t think the question of ESP is complex, nor do I think it fascinating. ESP is an intuitively appealing idea, easy to imagine, but for which there has never been offered any serious scientific theory or evidence. To the extent that there is “an exceedingly complex and fascinating question” here, it’s not about the existence or purported evidence for ESP, but rather it’s the question of how it is that so many people believe in it, just as so many people believe in ghosts, astrology, unicorns, fairies, mermaids, etc. OK, there aren’t so many believers anymore in unicorns, fairies, and mermaids, what with the lack of any direct corporeal evidence of these creatures. ESP, ghosts, and astrology are easier to believe in because any evidence would be indirect.

Anyway, here’s my guess of what was going on with Hofstadter’s statement that he considered ESP to be “an exceedingly complex and fascinating question.” Back in the 1970s, ESP seemed to be a live issue. Even though the most prominent ESP promoter was the ridiculous Uri Geller, there was some sense that ESP was an important topic. I’m not quite sure why, but I guess public understanding of science was less sophisticated back then, to the extent that even a computer science professor who didn’t think ESP existed could still consider it an interesting question.

Things have changed since the 1970s. You can study ESP if you want, but it’s no longer in the conversation, and there’s no sense that we have to show respect for the idea.

Trends in science are interesting to me. We’re no longer talking much about ESP—in retrospect, that notorious paper published in the Journal of Personality and Social Psychology in 2011 and featured in major media was not a breakthrough but rather the last gasp of real interest in the topic—but I have a horrible feeling that the sort of schoolyard evolutionary biology exemplified by the “Big and Tall Parents Have More Sons,” “Violent Men Have More Sons,” “Engineers Have More Sons, Nurses Have More Daughters,” and “Beautiful Parents Have More Daughters” papers remains in the mix. And, as long as TED talks are around, I don’t think embodied cognition, social priming, and other purported mind hacks will be going away any time soon.

Here I’m not just talking about ideas that have mass popularity. I’m talking about ideas that might or might not have mass popularity; what’s relevant here is that they are respected among educated people.

As we keep reminding you, surveys tell us that 30% of Americans believe in ghosts. I’m guessing it was around the same percentage back in the 1970s, at which time I doubt that Douglas Hofstadter believed in ghosts any more than he believed in ESP. The difference is that he respected ESP; I doubt he respected ghosts. ESP was science—possibly mistaken science, but still exceedingly complex and fascinating. Ghosts were a throwback.

Similarly, I expect that schoolyard gender essentialism will always be with us. What’s notable about some of the more ridiculous evolutionary-psychology elaborations of these ideas is that they have elite support—or, at least they did back in 2007 when they were featured uncritically in Freakonomics. I haven’t heard much about these theories recently but I guess they’re still out there. Embodied cognition is in some intermediate stage, similar to ESP in the 1970s in that it’s been shot down by the experts but still exists in the TED/NPR/airport-business-book universe.

One difference between ESP and the popular pseudoscience of today is that ESP, as generally understood, represents a pretty specific series of assertions that can be disproved to within reasonable accuracy. No, there’s no evidence that people can read your mind, or that you can transmit thoughts to people without sensory signals, or that anyone can predict random numbers, or whatever. In contrast, evolutionary psychology and embodied cognition are broad ideas which indeed are correct in some aspects; the controversy arises from overly general applications of these concepts.

OK, we could go on and on—I guess I already have! The main point of this post was that times have changed. Back in the 1970s it was considered sensible to take ESP seriously, even if you didn’t believe it. Now you don’t have to show ESP that kind of respect. New areas of confusion have taken over. As someone who was around in the 70s myself, I feel a little bit of nostalgia around ESP. It’s pleasantly retro, bringing back images of big smelly cars with bench seats.

55 thoughts on “Changes since the 1970s (ESP edition)”

    • Opher:

      Oh, I was just thinking of things like the elderly-priming-and-slow-walking study that elicited the since-retracted statement by Kahneman that “You have no choice but to accept that the major conclusions of these studies are true,” which elicited the memorable response from Wagenmakers a couple years later that “disbelief does in fact remain an option.”

      • I think it would be worth the mental effort involved to attach a different label to the research you mean when discussing those findings. The term “embodied cognition” refers to a heterogeneous constellation of research projects united by the view that the physical body is highly significant to cognitive capabilities (see here: https://plato.stanford.edu/entries/embodied-cognition/). While some weak priming studies may have framed their findings in terms of embodied cognition, the fact that those studies were terrible ought not to be taken as more than a modicum of evidence either for or against the value of the approach to cognition as a whole.

        I get that it is a hassle to unlearn these associations, but I really think it would be worthwhile. For readers with a cognitive science background the misuse of the term is a bit jarring and for those without that background, it is actively misleading.

  1. In science, “real” is effectively defined as “occurring with sufficient frequency and reliability as to be empirically verifiable.” There are surely infinitely many phenomena that are literally real but not “scientifically” real. Most, of course, represent trivial questions–say, whether some unique particle occasionally blinks in and out of existence without interacting with anything; or, how many times Washington sneezed while crossing the Delaware. Theories about such questions are rightly called unscientific, because they are unfalsifiable.

    It’s tempting to believe that all phenomena that are literally but not scientifically real must be trivial. Tempting, and unfalsifiable. Scientists who think they can prove unfalsifiable theories, and scientists who think unfalsifiability disproves theories, make the identical error.

  2. > In contrast, evolutionary psychology and embodied cognition are broad ideas which are indeed correct in some aspects […] The main point of this post was that times have changed.

    If I understand you correctly, I don’t think I agree about some kind of distinct difference. I think that belief in ESP, and people’s belief that they can broadly suss out the mechanisms of evolution to reverse engineer behaviors we see in today’s society, are very much akin to one another.

    I think of Bret Weinstein and Heather Heying, who podcast to massive audiences preaching how they’d figured out evo psych, and who also produce dozens of podcasts with content like explaining how the military mandate for vaccines is attributable to a desire to handicap the military. There’s a linkage, imo, between such bastardization of science and batshit crazy conspiracy beliefs, and I think that like ESP it all boils down to a very human tendency to see patterns in random sequences of events.

    I’ve been thinking lately about sports, and the overwhelming aspect by which random events become assigned narratives (teams that lose badly “came out flat,” or even that certain players are “injury prone” – there are so many)… The tendency to assign narratives to random events is enormous and I don’t think it’s something that really changes over time.

    • Given a person’s body dimensions, mechanics, and a certain repetitive motion undertaken in their sport, it is possible to be more injury prone than another person. Think about the function of orthotics, for example.

      • Sure – it’s possible. It’s not ALWAYS meaningless. For example, when an athlete repeatedly has the same or similar injuries with a similar etiology.

        But how do you know when it’s really physiologically based, and when it’s just a post hoc narrative to explain a basically random series of events?

        My point is that for most sports narratives, it’s almost impossible to tell what’s an invented narrative and what’s a statistically- or scientifically-based explanation. I’ve been thinking about this for quite a while, since I first heard that there’s really no such thing as a “clutch” hitter in baseball. I don’t buy that completely (maybe it’s true in general but not in all cases, not least because of how defenses adjust) but more and more I think scepticism is merited.

        • And just to add, it’s a HUGE business based on that tendency, not least with the explosion in sports betting.

          Not to say you shouldn’t enjoy sports or indulge in the narrative-inventing. It’s fun. But it’s kind of nuts at the same time.

  3. Hofstadter is still alive, I believe. Maybe we could ask him directly. My feeling is that GEB was written before our era of snark as a supreme value, and maybe he was being deferential to someone, Turing, who deserved a great deal of deference. This means that when we see a great person’s mistakes we don’t jump up and down and beat our chest.

    • Oncodoc:

      Nobody here is jumping up and down and beating their chest. My issue with Hofstadter was not his politeness (referring to Turing’s “outstanding scientific integrity”) but with his statement that ESP, or the scientific evidence regarding ESP, is an “exceedingly complex and fascinating question.” I agree that the question is not trivial, but I wouldn’t call it exceedingly complex or fascinating, either.

      And I don’t consider it a sign of disrespect to look at how Turing got this one wrong. As I wrote in my earlier post: When stupid people make a mistake, that’s no big deal. But when brilliant people make a mistake, it’s worth noting.

  4. All around us, there’s radio, wifi, bluetooth, etc transmitting huge amounts of info invisibly to/from devices that only require a few watts.

    And supposedly not a single lifeform has evolved to harness this ability? In the absence of an explanation for why there is no “ESP”, I find that completely implausible. There is almost certainly such communication going on, but probably highly compressed/encrypted so it is called “background noise”.

    • Anon:

      As I wrote in the earlier post, my problem with Turing’s statement is not that he thought ESP was possible or even that he felt strongly that it existed. My problem was his statement that “the statistical evidence, at least for telepathy, is overwhelming.”

      • It seems Turing was referring to J.B. Rhine. Harold Gulliksen (who was one of the first people publishing against NHST, btw) wrote a review in 1938:

        In the course of ten to a hundred-thousand trials slight but consistent errors in recording and unnoticed sensory cues may well give a deviation from chance expectation which, while small, will on statistical analysis be “significant”

        https://www.jstor.org/stable/2768488

        It also sounds like there was sampling to a foregone conclusion going on (the IID assumption was violated). So there very well could be overwhelming statistical evidence against the chance guessing model; a rough numerical illustration of Gulliksen’s point is below.

        Looks like Turing fell for the standard “reject a strawman, then conclude your favorite theory is true” fallacy.
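
        To put rough numbers on that (mine, not Gulliksen’s): with Zener cards chance is 1 in 5, so even a one-percentage-point systematic bias from recording errors or sensory leakage becomes wildly “significant” once you run on the order of a hundred thousand trials. A minimal Python sketch:

          import math

          # Hypothetical numbers for illustration: 100,000 guesses, chance rate 0.20
          # (five Zener symbols), and a true hit rate of 0.21 due to a small
          # recording/leakage bias rather than telepathy.
          n = 100_000
          p0 = 0.20            # chance rate under the null
          p_true = 0.21        # chance plus a 1-percentage-point bias
          hits = n * p_true    # expected hit count from the biased process

          z = (hits - n * p0) / math.sqrt(n * p0 * (1 - p0))
          p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))   # normal approximation
          print(f"z = {z:.1f}, one-sided p ~ {p_one_sided:.1e}")
          # z is about 7.9: "overwhelming" evidence against the chance model,
          # and no evidence at all for telepathy.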

        • Anon:

          From what I’ve read, there was flat-out fraud in these experiments as well as essentially unlimited researcher degrees of freedom. It’s sad that Turing fell for this, especially given that his area of expertise was the evaluation of evidence! But nobody’s an expert in everything, and everyone makes mistakes. And Turing was a product of his time: there was a lot of science fiction floating around, lots of progress had recently been made in physics, and Freud and Marx were still culture heroes. Put that all together and you have an environment in which people were willing to believe that anything was possible. Actually, not just that anything was possible, but that all sorts of things had already been demonstrated. Turing had less common sense on this particular issue than did Martin Gardner, but . . . the vast majority of people had less common sense than Martin Gardner. And Turing had lots of other things to offer the world. Indeed, his attitude that anything was possible might have been a key attribute that motivated him to put in all that effort to crack the Enigma code. A more commonsensical person might have just given up.

          For Hofstadter to say that ESP evidence is an “exceedingly complex and fascinating question” . . . well, I think Hofstadter was a product of his time too. The 1970s were an era when lots of these goofy ideas like spoon-bending, the Bermuda triangle, etc., were in circulation.

        • Turing was more of a theoretician, I don’t think he did much actual data collection or experimentation. So it doesn’t surprise me.

          I’ve noticed, in physics particularly, there are many with a deep understanding of the theory but very little understanding of the limitations and pitfalls of comparing it to observation.

          Look at the Bell tests: analyzing these experiments is very similar to looking for ESP, actually. It is nearly impossible to close all the “loopholes”, and they are interpreted via NHST.

          The actual results (S ~ 2.4) fall roughly halfway between the predictions of classical (S ≤ 2) and quantum (S = 2√2 ~ 2.8) models. This is used to reject the classical model, but the deviation from quantum mechanics is chalked up to experimental error. They fudge the results, and it seems the theorists are unaware or unconcerned.

          Meanwhile there are other models that predict the observed value; we just need to drop the assumption that particles behave like Markov chains, i.e., that there is a “memory”:

          “We demonstrate that, under certain conditions, the Bell inequality is violated owing to the wave-mediated coupling between the two subsystems. Our system represents a new platform for exploring whether Bell’s Theorem, typically taken to be a no-go theorem for all local hidden variable theories, need be respected by the class of hidden variable theories based on non-Markovian pilot-wave dynamics.”

          […]

          In conclusion, we have devised a platform for performing static Bell tests on a classical bipartite pilot-wave system. The maximum violation was found to be 2.49±0.04, and arose when the system geometries were chosen such that the droplet motion was marked by strongly synchronized tunneling for one measurement setting combination, moderate and weak synchronization for the others.

          https://arxiv.org/abs/2208.08940

          1. It’s true that it’s hard to close all the loopholes in a Bell test. However, the numbers you are quoting are *not* predictions, they are lower and upper bounds for entire classes of models. All models of a vaguely specified two particle system with local realism must predict less than or equal to 2, and all quantum mechanical models of a vaguely specified two particle system must predict less than or equal to 2√2. The proof of the latter bound is extremely abstract and uses only the algebraic properties of non-commutative Hermitian operators. That is, *you would not expect the upper bound to hold exactly in a real system* and the observed value being less than 2√2 is exactly what you should expect, since saturating the upper bound is a corner case. That is not “experimental error.” (A quick numerical illustration is sketched below.)
          2. You’re not understanding that article you linked at all. Quantum mechanics already admits an interpretation where particles are guided by a “hidden variable” pilot wave, it’s called Bohmian mechanics. It is explicitly a non-local theory. The appeal of the walking droplet example is that it reproduces the mathematics of a quantum mechanical system despite not actually being one. In this case, it is producing “non-local” behavior through interaction with the pilot waves, but the pilot waves propagate at a finite speed while their quantum mechanical analogs are truly instantaneous. In this real system, you can actually outrun the waves to observe the other particle before its collapse.

          We note that, in our system, the speed of communication between the two subsystems is bounded by the fastest capillary wave speed (approximately 20 cm/s)

          It’s just a toy model.
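
          Here is that numerical illustration (my sketch, assuming ideal polarization-entangled photons, for which quantum mechanics gives E(a, b) = cos 2(a − b)): the 2√2 value is attained only at specially chosen settings, which is why it is a bound rather than a generic prediction.

            import math

            # Ideal two-photon polarization correlation for analyzer angles a and b.
            # This is the textbook quantum prediction for a perfect apparatus
            # (my example, not a calculation from the thread).
            def E(a, b):
                return math.cos(2 * (a - b))

            def chsh(a, a2, b, b2):
                return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

            deg = math.radians
            # The standard settings (0, 45, 22.5, 67.5 degrees) reach the Tsirelson bound
            # 2*sqrt(2); other settings give smaller values, and real polarizers and
            # detectors reduce the value further.
            print(chsh(deg(0), deg(45), deg(22.5), deg(67.5)))   # 2.828... = 2*sqrt(2)
            print(chsh(deg(0), deg(30), deg(15), deg(45)))       # 2.598..., a non-optimal choice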

          The proof of the latter bound is extremely abstract and uses only the algebraic properties of non-commutative Hermitian operators. That is, *you would not expect the upper bound to hold exactly in a real system* and the observed value being less than 2√2 is exactly what you should expect, since saturating the upper bound is a corner case. That is not “experimental error.”

          Source?

          The impressive violation of inequalities (2) is 83% of the maximum violation predicted by quantum mechanics with ideal polarizers (the largest violation of generalized Bell’s inequalities previously reported was 55% of the predicted violation in the ideal case).

          https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.49.91

          I.e., they came up with a post-hoc fudge for why the prediction was off. If you read later papers, this is not even bothered with any more.

          If someone did this for ESP, would you accept it?

        • In this case, it is producing “non-local” behavior through interaction with the pilot waves, but the pilot waves propagate at a finite speed while their quantum mechanical analogs are truly instantaneous.

          Do you agree the Bell test assumes a particle cannot be influenced by its own past behavior and/or that of other particles? I.e., they do not leave any kind of “wake” at all.

        • Origin of the 2 sqrt2 upper bound for quantum mechanical models

          https://www.tau.ac.il/~tsirel/download/qbell80.pdf

          Original proof of Bell’s upper bound of 2 for locally real theories

          https://cds.cern.ch/record/111654/files/vol1p195-200_001.pdf

          In neither case are they actually *building* a model of the system. They are proving upper bounds for ANY 2 particle system with orthogonal binary states under ANY theory that satisfies certain criteria. In Bell’s case, it is theories which are locally real. In Tsirelson’s, it is theories where observable measurements are modeled by hermitian operators. These were never precise predictions about the mean/median/modal outcome of any particular experiment; hence, no, the deviations are not fudge factors.

          Locally real hidden variable theories are NOT classical theories. As they are defined in Bell’s theorem, they CAN possess memory in the form of some latent variable. Bell does not assume anything about what the latent variable is, what it represents, how it comes about, just that it’s there and acts locally. In the statement of equation 2

          Let this more complete specification be effected by means of parameters λ. It is a matter of indifference in the following whether λ denotes a single variable or a set, or even a set of functions, and whether the variables are discrete or continuous.

          So no, you are exactly wrong there.
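
          To make this concrete, here is a small simulation (my sketch, not anything from Bell’s paper): a deterministic local hidden-variable model in which a shared polarization angle plays the role of Bell’s λ, each detector’s ±1 outcome depends only on its own setting and λ, and no choice of the four measurement settings pushes the CHSH combination above 2.

            import numpy as np

            # Toy local hidden-variable model: both photons carry a shared angle lam,
            # and each detector outputs sign(cos(2*(setting - lam))), i.e. +/-1 depending
            # only on its own setting and the hidden variable.
            N = 200_000
            lam = np.linspace(0.0, np.pi, N, endpoint=False)   # hidden variable, averaged over
            angles = np.linspace(0.0, np.pi, 25)               # candidate measurement settings

            outcomes = np.sign(np.cos(2.0 * (angles[:, None] - lam[None, :])))  # shape (25, N)
            E = outcomes @ outcomes.T / N                      # E(a, b) for every pair of settings

            # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b') over all setting choices.
            S = (E[:, None, :, None] - E[:, None, None, :]
                 + E[None, :, :, None] + E[None, :, None, :])
            print(f"largest CHSH value found: {S.max():.3f}")  # ~2.0, never near 2.83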

          Did you even read the paper you posted? They do model their specific experimental setup in equation 5, and the curve matches their data almost exactly in figure 3.

        • They do model their specific experimental setup in equation 5

          Yes, that’s the post-hoc fudge they say is required due to non-ideal experimental conditions. It is exactly why I shared that paper. Nothing wrong with that, except it is post-hoc. As I said, recent papers don’t reuse that model to check it. They repeatedly observe results around 2.4 though, despite using a wide variety of experimental setups.

          What has been going on is they reject all models predicting S ≤ 2, then conclude their theory is correct. The walking droplets show that a local, real, deterministic, and causal model can yield S > 2. In fact it yields S ~ 2.4, as has been observed.

        • @Anon

          You are now just flatly repeating wrong things, clearly without reading either my comments or the papers you are linking. The 2.7 ≈ 2.69 number in equation 5 is the theoretical prediction for the experimental setup. It is not a “fudge factor” from 2√2, because 2√2 was never the prediction; it is the theoretical UPPER BOUND for ANY system of 2 entangled particles, not a prediction for THIS system. This is stated clearly in the paper. In the intro describing the motivating thought experiment it states, “On the other hand, for suitable sets of orientations, the quantum mechanical predictions CAN REACH the values S = ±2√2.” In the methods section, describing the actual experimental setup, it states, “With symmetrical polarimeters, quantum mechanics predicts Eq. 5 = 2.70.” (A toy illustration of how a setup-specific prediction ends up below 2√2 is sketched below.)

          Your walking droplet experiments are non-local. From the paper you linked:

          An assumption made in the derivation of Bell’s inequality that is not satisfied by our classical, hydrodynamic system is that of ‘Bell locality’. Specifically, the long range influence and persistence of the pilot wave ensures that both droplets are influenced by the entire domain geometry, specifically both measurement settings.

          In reality, of course it is local. The pilot wave travels at finite speed. But if you look at it that way, the whole experiment falls apart because you can actually observe both particles simultaneously, and one collapses before the other. The whole point is that it “looks” like a quantum system because the math matches up with quantum math and “looks” like it’s non-local because the waves travel so much faster than the droplets. The walking droplet pilot waves are not a fundamental theory that preserve locality, because in quantum mechanics the math and predictions are the same but the wavefunctions are non-local.

          Are you just skimming wikipedia articles and abstracts? You need to get into a little bit more depth to understand what you’re talking about.
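
          For what it’s worth, here is a toy illustration (my sketch, not the paper’s actual equation 5) of how a setup-specific prediction lands below 2√2: imperfect polarizers reduce the two-photon correlation by a visibility factor V, and the CHSH value shrinks by the same factor.

            import math

            # Reduced-visibility correlation E(a, b) = V * cos(2*(a - b)); the value
            # V = 0.955 below is made up to show how a prediction near 2.70 can arise,
            # it is not taken from the Aspect paper.
            def chsh_prediction(visibility):
                E = lambda a, b: visibility * math.cos(2 * (a - b))
                a, a2, b, b2 = map(math.radians, (0, 45, 22.5, 67.5))
                return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

            print(chsh_prediction(1.00))    # 2.828..., ideal polarizers
            print(chsh_prediction(0.955))   # ~2.70, in the range of setup-specific predictions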

          The 2.7 ≈ 2.69 number in equation 5 is the theoretical prediction for the experimental setup.

          A *prediction* means this equation was published before the experiment. Can you point out where?

          Was the same equation used to predict any future results?

          If so, then it is now a successful prediction. But when I check recent papers, they do not use this equation at all. You would think a precise fit of theory with observation is something to show off.

        • No, the equation was published simultaneously with the paper. The prediction is closely coupled to the physical experiment design which is of no fundamental significance, so the only reason that would be done is for pre-registration which was not fashionable back then (or now).

          I’m certain that they didn’t go in expecting the theoretical upper bound for ideal qubits at 2√2. If you are suggesting that they went through with the experiment, didn’t see what they expected, then came up with a model that matches, it’s possible but dubious. Their experimental setup is quite simple, and the derivation of that expression is extremely straightforward without much to fudge. Any weirdness in the form of the expression would be immediately obvious. The only real degree of freedom is the reflectances for their polarizers, which you can measure independently. It’s possible that they didn’t actually measure those reflectances and fit them to the data, but that would be a pretty wild thing to do.

          It’s true that people don’t usually derive a specific prediction for their Bell inequality experiment. It’s important to understand the historical context here; classical particles were already thoroughly falsified when Bell’s theorem came on the scene and nobody, Einstein included, was defending them as a fundamental theory. The theorem and tests are motivated by the EPR debate, and only have the goal of resolving between the class of locally real and non-local/non-real theories. They are not falsifying classical theory and thereby confirming quantum theory. They are falsifying all potential theories involving deterministic local hidden variables. Nobody is going in expecting to saturate the theoretical upper bound of 2√2, though it has become somewhat of a physicist challenge to see how close one can get.

        • The prediction is closely coupled to the physical experiment design which is of no fundamental significance, so the only reason that would be done is for pre-registration which was not fashionable back then (or now).

          Comparing the predictions of QM to the data is of utmost significance. That is how you tell if the theory reflects reality.

          And it isn’t hard to come up with a plausible derivation to explain anything after the fact. Remember this?

          After months of rumours, speculation and some 500 papers posted to the arXiv in an attempt to explain it, the ATLAS and CMS collaborations have confirmed that the small excess of diphoton events, or “bump”, at 750 GeV detected in their preliminary data is a mere statistical fluctuation that has disappeared in the light of more data.

          https://physicsworld.com/a/and-so-to-bed-for-the-750-gev-bump/

          If you adjust your model after seeing the data, that is a post-hoc fudge. You need to then compare the predictions of the same model to new data. Science 101.

          In reality, of course it is local.

          Yes. There is nothing paradoxical or strange about a boat being influenced by its own wake, or that of other boats that passed through earlier. It is a completely mundane phenomenon. That is not what people are thinking when using the term “non-local”.

        • somebody:

          It’s not clear to me whether Bohmian mechanics has to be “instantaneous” or if, for example, the wave function might propagate at simply an enormous speed, for example 10^10 times the speed of light. These would be difficult to distinguish in any real experiment, I think.

        • > the pilot waves propagate at a finite speed while their quantum mechanical analogs are truly instantaneous.

          The wave functions in quantum mechanics are not even physical things propagating in physical space. They exist in configuration space. For a single (and zero-spin) particle in the position representation the configuration space may look like a spatial description but that doesn’t automatically give the wave function a physical interpretation. For a system of two particles the configuration space has 6 dimensions – the idea that the wave function describing the system can be something that propagates through space is completely shattered.

        • > the wave function might propagate with simply an enormous speed

          The wave function is defined “everywhere”. What do you mean by “propagate”? What would go where with an enormous speed?

        • @Anon

          I’m not disagreeing that physicists use null hypothesis significance testing and it has a bad effect on the quality of their science, or that modern theorists in physics don’t understand experimentation very well. I’m disagreeing with your characterization of the Bell experiments because it’s flatly incorrect. They were not testing QM, they were testing all locally real theories, which were by no means a straw-man at the time. That local realism is dead is a mathematical certainty. If you can’t follow Bell’s argument, I’m afraid I can’t help you much there; you’ll have to take an introductory class on probability theory because that’s really the only thing the derivation uses.

          @everyone else

          We are going down an even more boring rabbit hole

        • Carlos:

          I can formulate the Newtonian dynamics of 10 asteroids orbiting each other via gravitation in terms of a Lagrangian formulation in which the potential energy is a function of “configuration space”. Nevertheless the acceleration of each individual asteroid depends on the derivative of the Lagrangian with respect to that asteroid’s location. Bohmian mechanics is the same way: the momentum (rather than acceleration) of the individual particles depends on the derivative of the wave function on configuration space with respect to that particle’s position.

          I’m no expert on Bohmian mechanics so I’m relying on https://plato.stanford.edu/entries/qm-bohm/#DefiEquaBohmMech

          Despite the fact that Newtonian gravitation relies on a potential energy which is in “configuration space” we can formulate a version of it in which there is a “gravitational potential field” which is a function of space.

          That’s more or less what I’m wondering about. Is it possible or impossible to formulate a version (a variation, perhaps) of Bohmian mechanics in which there is a “quantum field” whose value at position x is influenced by stuff that happens at position y at a time retarded by dist(x,y)/Cq, where Cq is a “propagation velocity” very much larger than the speed of light?

        • Sure, maybe standard QM – including Bohm’s version of it – is not true. Then we would be discussing something else. According to that link:

          Bohmian mechanics is manifestly nonlocal. […] It should be emphasized that the nonlocality of Bohmian mechanics derives solely from the nonlocality […] built into the structure of standard quantum theory. [..] As Bell has stressed, “That the guiding wave, in the general case, propagates not in ordinary three-space but in a multidimensional-configuration space is the origin of the notorious “nonlocality” of quantum mechanics. It is a merit of the de Broglie-Bohm version to bring this out so explicitly that it cannot be ignored.”

          [I don’t think this subject is boring but it’s extremely off-topic so I’ll leave it here.]

          Sure, technically it would be “wrong,” but it would not be the kind of wrong that we think of normally. Like technically Newton’s mechanics for asteroids orbiting each other is wrong, but at the speed of light, if the asteroids are all within a couple km of each other, then the retardation of gravity is on the order of a couple microseconds, and if the velocities are all a few km/s or less then the errors in position are all on the order of millimeters for objects that are km across… In other words, Newton’s gravity is an asymptotic theory for the case where the speed of gravity is “infinite”.

          So for my idea, psi(x1,x2,x3) for a 3 particle system would be replaced, for the purposes of calculating x1’s momentum, by psi(x1(t),t) (the value of a field), and the field value at particle x1’s position at time t would be informed by what was going on with x2 and x3 at times t-dist(x1,x2)/Cq and t-dist(x1,x3)/Cq. If Cq is say 10^40 m/s then even if the distance is 10 light years the retardation is about 1e-23 seconds, so it looks like everything depends instantaneously on everything else. Hence the psi appears to be “on configuration space” because instantaneously it depends on all the positions.

          In other words, Bohmian mechanics would be an asymptotic theory for an underlying dynamics that is “so fast” as to be effectively instantaneous for all experimental regimes tried so far. The interesting thing about such a theory is that perhaps one could probe the effective speed by doing experiments with detectors very far apart. With QM as is you can’t make predictions for the effect of distance on behavior, as there is no such effect.

        • In other words, Bohmian mechanics would be an asymptotic theory for an underlying dynamics that is “so fast” as to be effectively instantaneous for all experimental regimes tried so far.

          This isn’t required though. There just needs to be a common influence. And there is one: for superdeterminism, it’s the collective history of the universe back to the big bang.

          But that is only an extreme example that requires dropping “free will”. Basically the assumptions of independence are false for an entire class of models where there is a shared “background”.

          As stated in Ref. [6], MI can only be imposed, and therefore the freedom-of-choice loophole closed, under the assumption ‘that λ is created with the particles to be measured’. In the generic model of Section IV the λ do not only describe (particle) properties at emission, but also a resonant background in the spacetime neighborhood of the detection events. Of course, one could say that such background-based or pilot-wave HVTs rely on another type of nonlocality, or rather holism, since they invoke a delocalized field. But the difference is clear: this is physics as usual, and not spooky-action-at-a-distance. In the intuitively simplest picture, directly inspired by the droplet-systems, entanglement and apparent nonlocality can arise in such theories when particles coherently move in resonance with a periodic wave (or, by extension, when they are singularities in such a field, cf. Section VI.B).

          https://arxiv.org/abs/1701.08194

        • @Anon

          Yes, if there is a mutual cause that determines not just the behavior of both particles but also the observer’s choice of which angle to measure, then such a theory can be local and admissible. There is no such theory that is compatible with quantum experiments, doesn’t have an observer effect, and makes specific predictions, that physicists have missed because of NHST.

        • Daniel:

          > The interesting thing about such a theory is that perhaps one could probe
          > the effective speed by doing experiments with detectors very far apart.

          It would certainly be interesting if an experiment could do that. Do you want such a theory to be Lorentz invariant, or do you think that maybe Lorentz invariance does not hold in general?

          As soon as locality is violated, observers can disagree as to which of two events is the cause and which the effect.

          There is no such theory that is compatible with quantum experiments

          As already discussed, it isn’t clear to me that QM itself is consistent with the Bell test results. It shouldn’t be hard to publish the predictions beforehand and then compare them to the data.

          It seems you agree with me this has not happened, but maybe it has and neither of us has heard of it. Post-hoc models tailored to the specific results do not count for obvious reasons (of course they can include parameters like detection efficiency, etc that are also measured).

        • Actually, here are two experiments:

          1) Run the same experiment with the detectors/analyzers/whatever various distances apart. AFAIK, QM predicts this should not affect the S-statistic. Even if opposite sides of the earth are too close to detect a deviation, it would provide a lower bound (see the arithmetic sketched below).

          2) Run the same experiment using the same equipment with supposedly non-entangled particles as a control and report the results.
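
          For experiment 1, the lower bound would come from arithmetic along these lines (the separation and timing numbers here are made up for illustration):

            # If S is unchanged with the detectors a distance d apart and the two measurements
            # falling within a time window dt of each other, any finite-speed influence
            # linking them would have to travel faster than d / dt.
            d = 1.27e7    # meters, roughly the diameter of the earth (assumed separation)
            dt = 1e-9     # seconds, assumed timing window between the two measurements
            c = 3.0e8     # speed of light, m/s

            v_min = d / dt
            print(f"implied lower bound: {v_min:.2e} m/s, about {v_min / c:.0e} times c")
            # roughly 1.3e16 m/s, or about 4e7 times the speed of light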

    • Anoneuoid –

      > And supposedly not a single lifeform has evolved to harness this ability? In the absence of an explanation for why there is no “ESP”, I find that completely implausible.

      Unless you think that he was arguing that dolphins or whales have ESP, your argument lacks coherence.

      I’m going to guess he was arguing that there’s overwhelming statistical probability that humans possess ESP.

      • And thinking about it more – he was probably not just thinking of just some vague ability to sense that there’s something going on, but the ability to communicate and understand specific concepts or thoughts or ideas as communicated by/among humans – as I’m guessing “telepathy” was conceived of at that time. So again, I’d guess “a single evolved lifeform” is a tad bit of a stretch. Unless you think that there are lifeforms that not only can vaguely perceive something via in-built Bluetooth or whatever, but can understand what humans are communicating through some telepathic sensibility.

    • Well, most sensory systems evolved prior to radio and cell phones appearing on the scene.
      More seriously, light at the earth’s surface is about half infrared, 42% visible light, and 8% ultraviolet. Life forms have evolved to use this bandwidth to get around; quite a few use UV, some use IR, and most others stick to the in-between wavelengths because that is where the reliable energy is and is useful for the various eyes on the planet. Not to mention leaves, etc.

      • For communication you would want to use frequencies where there is the least noise though.

        I was also thinking about these ESP experiments that have someone guess what card someone else is looking at. If we replace the humans with two robots containing radio/whatever transceivers, would the experiments be able to detect the communication? You would need to know about tuning to the same frequency, sources of interference, directionality, etc or else the results would be very inconsistent.

        • ESP refers to perception that doesn’t utilize the senses: clairvoyance, precognition, etc.

          Do you think robots with Bluetooth can do that?

          Reminds me of what you thought was a near certainty with COVID.

  5. Douglas Hofstadter is one of my heroes, a modern polymath; I love his written works. Unfortunately, I read GEB so long ago I don’t remember it.

    I’m fascinated by ESP, but with the term limited to perceptions beyond those available to humans and having nothing to do with Daryl Bem. Here in south Texas where I am visiting my mother, there is a shrub known as “barometer bush,” Leucophyllum frutescens, which blooms a few days before it rains. You can read confident statements about why that occurs, but actually no one really knows. There are also extensive records of reports of animals behaving strangely before earthquakes and other calamitous events, far too many to be dismissed by a genuine scientist. Maybe I am cutting Hofstadter too much slack, but if he was thinking of these sorts of phenomena, the questions were legitimate then and still persist today.

    • Many animals can sense things that we can’t; salmon can sense magnetic fields and polarized light, for example. Given their migratory life cycle, it is easy to see why these abilities should be selected for; they get used in navigation. Things don’t evolve just because they are possible; they evolve because they are possible and increase fitness.

      I’ve also heard accounts I take seriously of cattle behaving strangely before earthquakes, but if they can sometimes sense when one is coming, it must be in terms of a sensory pathway that they use for other things, as well. Serious earthquakes don’t happen often enough for the ability to sense them to have selective value.

  6. It’s been a long time since I read GEB, so I have no idea what Hofstadter was thinking, but there is more light to be shed on the 1970s and 1980s.

    Recall that Puthoff & Targ were doing remote viewing research at SRI starting in the early 1970s:
    https://en.wikipedia.org/wiki/Remote_viewing

    GEB was published in 1979, the same year as the founding of PEAR, the Princeton Engineering Anomalies Research Lab.:
    https://en.wikipedia.org/wiki/Princeton_Engineering_Anomalies_Research_Lab

    and for decades there were many ESP-related articles in the Journal of Scientific Exploration:
    http://web.archive.org/web/20150318030748/http://www.scientificexploration.org/journal/articles.html
    The articles look sciency, but of course, would be unlikely to get published in Science or Nature :-)
    In 2005, Jahn declared victory and shut down PEAR:
    http://web.archive.org/web/20150424111522/http://www.scientificexploration.org/journal/jse_19_2_jahn.pdf

    Of course, if one searches the Committee for Skeptical Inquiry’s (CSI’s) Skeptical Inquirer archives, ESP beliefs have persisted surprisingly long, in part because TV shows resurrect them:
    https://skepticalinquirer.org/s/?_sf_s=esp

    (Disclosure: I’m a CSI Scientific/Technical Consultant).

  7. I’m happy to say that my alma mater, The University of Edinburgh, has the Koestler Parapsychology Unit! They list two staff members, one of whose interests are listed as “Replication and methodological issues in parapsychology, with a particular interest in ESP research using the ganzfeld method.”

    As far as reading Gödel, Escher, Bach post grad school, who cares? I read it when it came out and I was in high school and it changed my life. Time for a full post.

  8. > Back in the 1970s it was considered sensible to take ESP seriously, even if you didn’t believe it. Now you don’t have to show ESP that kind of respect.

    But the general public still commonly believe in ESP, don’t they? I’m not really sure how to interpret this.

  9. That local realism is dead is a mathematical certainty. If you can’t follow Bell’s argument, I’m afraid I can’t help you much there; you’ll have to take an introductory class on probability theory because that’s really the only thing the derivation uses.

    Yet, the oil droplet model shows you can get a Bell violation from a system of entirely local interactions. We can redefine “locality” if you want, so that the emergent behavior of the system is considered “non-local”, but that is not what was originally meant.

      • They’ve proposed a specific mechanism (because non-Markovian dynamics) for why this correlation exists. So dealing with these loopholes isn’t going to go the same way as the usual Bell tests (because QM is spooky).

        Moving the droplets/analyzers farther apart pushes the shared past further back, but they will still have been influenced by the same events and each other. For the droplet model, I’d expect a decaying influence with distance. I’m not sure if the people running these studies would agree though.

  10. To understand what Hofstadter meant, see Chapter 5 of Metamagical Themas, “World Views in Collision”.

    Hofstadter had no doubt that ESP does not exist. He has been on the editorial board of The Skeptical Inquirer for a long time.
