The point about the invalidity of inferring the failure of the conditional probability rule by ‘conditioning’ on different experimental setups has been made before here. Many times, apparently. However, the general situation in QT is not quite as simple as “the rule is valid in QT” (see the discussion of “quantum conditional states” in the linked article).

P.G.L. Porta Mana

https://arxiv.org/abs/2007.08160

Cox’s theorem yields only the laws of probability. You can then start assigning (diagonal) density matrices.

I just don’t understand this sentence. First observation: I can always diagonalize a density matrix. It sounds like you imply that a diagonal density matrix is classical, but that depends on the observables. Maybe you mean diagonal in the basis that also diagonalizes the observables? I’ll assume the latter.

Then: if I have understood correctly, without all the mathematical quirks, by (A, phi) you mean something equivalent to A = set of projectors, phi = density matrix. Then maybe what you mean is: Cox’s theorem starts from propositions but does not implement them; in general a set of propositions need not be decomposable into a set of atomic propositions; so I can implement propositions with projectors and phi as the probability map? If so, I’d like to see the implementation: what are P and Q, “P and Q”, “P or Q”?

What I was thinking up to now is: quantum probability = the density matrix represents your state of knowledge, while classical probability = a probability distribution represents your state of knowledge.

That’s the basic idea. Remember that QP includes CP [algebras of functions]! You can find definitions in many of the references I’ve already given. Probably the best thing for you to do is peruse the list in the references section on the nLab page and find the one(s) most suitable for you given your background knowledge.

Uhm, ok, but from Cox’s theorem I obtain probability distributions, not density matrices, right? How do you make the link?

Cox’s theorem yields only *the laws of probability*. You can *then* start assigning (diagonal) density matrices.
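To make the link concrete, here is a minimal sketch of my own (NumPy, toy numbers; not anyone’s official construction): a classical distribution, such as one justified by Cox-style reasoning, sits inside quantum probability as a diagonal density matrix, and the Born rule reads the same probabilities back out.

```python
import numpy as np

# A classical probability distribution over three exhaustive, mutually
# exclusive propositions, e.g. the output of an ordinary Bayesian analysis.
p = np.array([0.2, 0.5, 0.3])

# Embed it as a diagonal density matrix in a fixed orthonormal basis.
rho = np.diag(p)

# The basis projectors play the role of the atomic propositions.
for i in range(len(p)):
    P = np.zeros((3, 3))
    P[i, i] = 1.0
    assert np.isclose(np.trace(rho @ P), p[i])  # Born rule: Tr(rho P_i) = p_i
```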

No: Holevo defines a statistical model, which isn’t the same thing as a (quantum) probability space. But a statistical model can be extracted from one: take all the states in A to make S and make M out of [commutative, Hermitian] subalgebras of A.

Ok, although the name is different I thought that was what you meant by generalized probability. Please state your definition, because this means I still have not understood precisely what you intend by quantum probability. (Your sentence sounds to my ears like “no, it is not that, but yes, it is that (??)”.) (What I was thinking up to now is: quantum probability = the density matrix represents your state of knowledge, while classical probability = a probability distribution represents your state of knowledge.)

I thought we were on the same page – Bayesian – philosophically (I don’t think the flavour – Objective or Subjective or some mixture – matters). All mathematical models of probability in that sense – classical, algebraic/quantum, etc. – are models of the same concept. So you can’t order them by fundamentality but you can order them by generality.

Uhm, ok, but from Cox’s theorem I obtain probability distributions, not density matrices, right? How do you make the link?

the point is not doing the calculation in the complicated way, I agree e.g. the tomographer should use the density matrix; rather it is philosophical, it is what I put down as fundamental. Like, the von Neumann construction of the integers is arguably more complicated to use than the Peano axioms, but you get the advantage of being founded on set theory, then you probably never use it again.

I thought we were on the same page – Bayesian – philosophically (I don’t think the flavour – Objective or Subjective or some mixture – matters). All mathematical models of probability in that sense – classical, algebraic/quantum, etc. – are models of the same concept. So you can’t order them by fundamentality but you can order them by generality.

So, generalized probability is made up of states, and measurements that yield, out of the states, probability to be used in the usual sense… How can this be fundamental with respect to probability, since the latter is used in defining it?

No: Holevo defines a statistical model, which isn’t the same thing as a (quantum) probability space. But a statistical model can be extracted from one: take all the states in A to make S and make M out of [commutative, Hermitian] subalgebras of A.

[…] I want a clear explanation!

Yes, I’m not doing very well, but have we made any progress – in particular on the “fundamentalness” issue?

P.S. Maybe I’m asking too much. Maybe a “clear” explanation is related to the problem of the interpretations of quantum mechanics, in the sense that I always end up with classical probabilities because in some way classicality emerges from quantum mechanics and humans behave classically.

I’m afraid I can’t see how it simplifies anything. It’s augmenting a quantum probability space (Α, ϕ) with a classical probability space (Ω, Σ, μ) constructed on the boundary of Α. But μ just amounts to an extra, redundancy-laden set of coordinates for the QP state ϕ. I can’t see any use for it even for the tomographer of your example. He only knows Α and so must estimate ϕ, but that’s just a matter of estimating 3 numbers – the usual sort of coordinates for ϕ – directly.

You continue insisting on (and emphasizing) the mathematical complication of the construction. I already said many times in various ways that the point is not doing the calculation in the complicated way, I agree e.g. the tomographer should use the density matrix; rather it is philosophical, it is what I put down as fundamental. Like, the von Neumann construction of the integers is arguably more complicated to use than the Peano axioms, but you get the advantage of being founded on set theory, then you probably never use it again.

I’ll quote the definition I found in Holevo’s book:

1.3. Definition of a statistical model

Motivated by consideration in Section 1.1, we define a statistical model as a pair (S,M) where S is a convex set and M is a class of affine maps of S into the collections of **probability distributions on some measurable spaces U**. The elements of S are called states, and the elements of M measurements. The problem of theoretical description of an object or a phenomenon satisfying the statistical postulate can then be described as a problem of construction of an appropriate statistical model. In more detail, the construction must first give a mathematical description of the set S of theoretical states and the set M of theoretical measurements and second, prescribe the rules for correspondence between the real procedures of preparation and measurement and the theoretical objects, i.e., an injection of the experimental data into the statistical model.

So, generalized probability is made up of states, and measurements that yield, out of the states, probability to be used in the usual sense… How can this be fundamental with respect to probability, since the latter is used in defining it?

Also, it sounds a lot like the state is the parametrization of some distributions. It really sounds like the normal definition of a statistical model. What is generalized here? The generalization is more about the classical vs. nonclassical description of reality, but from the statistics perspective you just care that there are parameters and distributions. Why stick your physics into the definition of probability? What if your physics changes and the elegance of the density matrix crumbles?
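To see what the (S, M) in that definition looks like in the quantum case, here is a minimal sketch of my own (NumPy, assuming the standard Born rule; a toy illustration, not Holevo’s notation): the states form a convex set of density matrices, and a measurement is an affine map from states to outcome distributions.

```python
import numpy as np

# Two elements of the convex set S: 2x2 density matrices.
rho1 = np.array([[1, 0], [0, 0]], dtype=complex)           # |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|

# One element of M: a two-outcome measurement (projectors onto |0>, |1>).
E = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

def measure(rho):
    """Map a state to a probability distribution on the outcomes {0, 1}."""
    return np.real([np.trace(rho @ Ei) for Ei in E])

# Affinity: the map commutes with convex mixtures of states.
w = 0.3
mix = w * rho1 + (1 - w) * rho2
assert np.allclose(measure(mix), w * measure(rho1) + (1 - w) * measure(rho2))
```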

The QP space (Α, ϕ) above denotes, in general, a [possibly noncommutative] algebra of “random variables” and [one of the] “states” in its dual. (The algebra Α is M₂ for the specific case of your spin-1/2 tomography example.)

Just putting down the definition of the mathematical object is not sufficient here. You have to connect it to reality; the definition above connects it to reality through the preexisting probability concept. (Disclaimer: I don’t remember what an algebra is in general, and I don’t know what phi is, or what M₂ is.)

I want an empirical explanation that introduces the density matrix without resting on the preexisting probability concept (and which is not frequentist in the bad sense). Otherwise I can accept that I can see every probability as just a special case of a generalized probability, but not that the latter is the fundamental concept.

I’m not really against quantum probability. As you said at the very beginning of the discussion, it simplifies thinking about quantum mechanics. But I want a clear explanation!

Is it false or is it just math?

You cannot have your cake and eat it too ;-)

Carlos:

Thanks much. This will be useful for us in preparing our revision.

Carlos:

As a person who uses probability models all the time, I find it useful to know that probability is actually false, in the sense described in the paper, that there are problems for which the usual application of conditional probability will not work.

One could turn the argument on its head and say that the paper seems written by people who are less interested in making a point about the dangers of building a probability model by intuition and more interested in demonstrating that they once took a course in quantum mechanics. :-)

It’s not very clear to me what the point is. The two main difficulties that quantum physics creates for statistical inference according to the paper seem to be that a) quantum superposition is not Bayesian probabilistic uncertainty and b) the authors have no idea whether nodes, wave behavior, entanglement, and other quantum phenomena could manifest in observable ways in applied statistics, motivating models that go beyond classical Boltzmann probabilities. Both points are true, I guess. Maybe there would be fewer objections if the paper were titled “musings on” rather than “holes in” Bayesian statistics.

I find the discussion about physics confusing. For example, I would say that |1>, |2> and |1>+|2> are states, not outcomes; the inequality {x=1 or x=2} =/= {x=|1>+|2>} mixes the two, so it is not even consistent notation. Another thing I don’t understand: “the awkwardness that x can be measured for some conditions of the number of open slits but not others.” I would say that x can be measured whenever we want; we just have to make a measurement.

I also find the discussion about physics distracting, if the true objective is to discuss statistical issues. Maybe others find the discussion useful, but it made me think about the following quote: “The second law, when formulated in terms of entropy, gives powerful insights into a wide variety of problems. It seems a shame to take such a beautiful and exact idea and blur its meaning by indiscriminately applying it to all sorts of areas that have nothing to do with equilibrium thermodynamics. Such a procedure might disorder our thoughts about other disciplines that are already difficult enough.”

“you could able to come up”

“the approach can makes sense”

“will have essentially no inference on the parameters”

“also makes a claims”

“we can take consider this is as a prior”

“the actual result with notes”

“a modification to a proper by weak prior”

“but rather as a motivation to better understand than then to improve modeling and inferential workflow”

The liberal use of commas is mostly a matter of style, but some stand out:

“could more formally write, p(…)”

“the default linear mixture (0.5, p(…) + 0.5 p(…))”

This fragment sounds kind of strange to me, but maybe it’s intentional:

“… to simply insist on realistic priors priors for all parameters in our Bayesian inferences. Realistically it is not possible to capture all features of reality…”

Since there is the construction by which you can always see a quantum probability as a classical probability on a higher-dimensional probability space, I don’t agree that I need the general probability as fundamental. I can pick! And in these cases it is normal to pick the simpler one, like, a simpler rule over more objects.

I’m afraid I can’t see how it simplifies anything. It’s augmenting a quantum probability space (Α, ϕ) with a classical probability space (Ω, Σ, μ) constructed on the boundary of Α. But μ just amounts to an extra, redundancy-laden set of coordinates for the QP state ϕ. I can’t see any use for it even for the tomographer of your example. He only knows Α and so must estimate ϕ, but that’s just a matter of estimating 3 numbers – the usual sort of coordinates for ϕ – directly.

I feel there must be a way to put down generalized probability all at once, so I don’t rely on a preexisting probability concept

The QP space (Α, ϕ) above denotes, in general, a [possibly noncommutative] algebra of “random variables” and [one of the] “states” in its dual. (The algebra Α is M₂ for the specific case of your spin-1/2 tomography example.)

> Or you can make a machine that doesn’t record which pure state is a good description of each system it prepares, or you or someone else can inspect the internal construction of your machine closely enough to determine a mixed state description for each system it prepares.

Yes, I agree my example is built ad hoc, it’s a specific example where it is easier to use classical probability and the phase space construction of the density matrix.

> > If I think quantum, then in each passage I have to use quantum probability […]

> Let me re-emphasise that it contains classical probability and of course you’ll want to keep on using the conceptually and technically simpler Kolmogorovian representation of that part of it wherever it’s appropriate.

> > In other words, the point that the phase space description adds unnecessary complication is ambiguous. Because the density matrix generalizes a distribution, but can also be derived from a distribution.

> It isn’t ambiguous; it’s a plain mathematical fact. A quantum probability state [“density matrix”] can’t be derived from a classical probability distribution. Your quantum tomographer is estimating one from the data he/she collects and the knowledge that it’s appropriate to do that rather than estimate a classical probability distribution [over some ‘hidden’ variables].

Ack, I don’t know what you read in “derived”, I meant that there is the phase space description. In the thought experiment the guy maneuvering the machine derives the density matrix from a probability over the spin states. The other guy can’t ever see the full probability with his measuring apparatus, but he can still think there’s one if he wants, in fact it turns out at the end he gains access to it.

> You need the general notion of probability – that is what’s fundamental – and “general probability ⊃ quantum probability ⊃ classical probability”. Surely the way out for a Bayesian, acutely aware of the distinction between a probability and a frequency, is to not use a thought experiment in which both are involved in such a diabolical, frequentist manner? ;-)

(Uhm, whatever I do must make sense in any concrete example. But you are half-joking here.)

Since there is the construction by which you can always see a quantum probability as a classical probability on a higher-dimensional probability space, I don’t agree that I need the general probability as fundamental. I can pick! And in these cases it is normal to pick the simpler one, like, a simpler rule over more objects.

It is like saying that the fundamental notion of a Markov chain is the one with 1-step memory. You can see it as a special case of the one with n-step memory and time dependence, but the latter can again be seen as a special case of a 1-step chain on an (n+1)-dimensional vector. But if you happen to work a lot with n-step memory chains, maybe you work more easily by taking the other choice as fundamental, so it is an arbitrary choice after all, based on convenience.
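A throwaway sketch of that reduction (plain Python, made-up transition probabilities): a 2-step-memory chain on {0, 1} becomes an ordinary 1-step chain once the state is taken to be the pair of the last two values.

```python
import random

# P(next = 1 | state two steps ago = a, last state = b), for a toy chain.
p_next = {(0, 0): 0.9, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.1}

def step(pair):
    """One step of the equivalent 1-step chain on the enlarged state space."""
    a, b = pair
    c = 1 if random.random() < p_next[(a, b)] else 0
    return (b, c)  # the pair (last, new) is all the memory we need

pair = (0, 0)
for _ in range(10):
    pair = step(pair)
```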

Another, weaker analogy is with Gaussian processes, where I build an infinite-dimensional probability space, but when I have the data points and the points where I want the prediction, I just need to compute the covariance matrix on those points. But sometimes it is convenient to decompose the problem back into a basis of functions.
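For concreteness, this is the finite computation in question (a minimal NumPy sketch with made-up data; noise-free apart from a numerical jitter term): only the covariance matrix on the observed and prediction points ever enters.

```python
import numpy as np

def kernel(x1, x2, scale=1.0):
    """Squared-exponential covariance function."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / scale**2)

x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([0.0, 0.8, 0.9])
x_new = np.array([1.5])

K = kernel(x_obs, x_obs) + 1e-9 * np.eye(len(x_obs))  # jitter for stability
k_star = kernel(x_new, x_obs)

# GP posterior mean and variance at the prediction point: just linear algebra
# on finitely many points, despite the infinite-dimensional prior.
mean = k_star @ np.linalg.solve(K, y_obs)
var = kernel(x_new, x_new) - k_star @ np.linalg.solve(K, k_star.T)
```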

The last example I made and called “recursion paradox” is relevant, you won’t get away with a joke! As I read in Holevo’s book: the generalized probability has a “state space” and “measurements”. The measurements produce probabilities out of the state space. Wait: so what are these probabilities? Holevo introduces them in a totally frequentist manner, but that has the usual problems. Anyway, it appears you need some preexisting notion of classical probability that stands under the definition of generalized probability. Then you find out generalized probability can be seen as usual probability over a larger space. So, uhm, it is the classical which is fundamental, because you need it in order to define the generalized one, and the generalized one actually is still a special case of it!

Maybe it is Holevo’s treatment which is half-assed in this respect. I feel there must be a way to put down generalized probability all at once, so I don’t rely on a preexisting probability concept, and the choice of what is fundamental becomes totally free again.

It is not literally what you said, but yes frequentists are diabolical creatures.

Example: I can prepare spins in pure states. I can make a machine that, given a pdf on pure states of spin, generates spins according to that distribution at a given rate, and I can send the spins one at a time and record in which state each of them was.

Or you can make a machine that doesn’t record which pure state is a good description of each system it prepares, or you or someone else can inspect the internal construction of your machine closely enough to determine a mixed state description for each system it prepares.

If I think quantum, then in each passage I have to use quantum probability […]

Let me re-emphasise that it contains classical probability and of course you’ll want to keep on using the conceptually and technically simpler Kolmogorovian representation of that part of it wherever it’s appropriate.

In other words, the point that the phase space description adds unnecessary complication is ambiguous. Because the density matrix generalizes a distribution, but can also be derived from a distribution.

It isn’t ambiguous; it’s a plain mathematical fact. A quantum probability state [“density matrix”] can’t be *derived* from a classical probability distribution. Your quantum tomographer is *estimating* one from the data he/she collects *and* the knowledge that it’s appropriate to do that rather than estimate a classical probability distribution [over some ‘hidden’ variables].

So I need a classical notion of probability to use a quantum probability. This selects classical probability as fundamental. As usual, not really sure, I feel there’s a way out from this “recursion paradox”.

You need the general notion of probability – that is what’s fundamental – and “general probability ⊃ quantum probability ⊃ classical probability”. Surely the way out for a Bayesian, acutely aware of the distinction between a probability and a frequency, is to not use a thought experiment in which both are involved in such a diabolical, frequentist manner? ;-)

Ok, I said I conceded victory in this case; it turns out I was lying, because it is Saturday and I took the time to fully read the first chapter of Holevo’s book.

So, yes, that construction is exactly what you mean; I got confused with purification because I thought “infinite-dimensional” referred to the Hilbert space instead of to the probability space over the Hilbert space.

I can accept that the construction is needless in the sense that, given a density matrix, I don’t need to decompose it in that way to do subsequent calculations, but it is not needless in general.

Example: I can prepare spins in pure states. I can make a machine that, given a pdf on pure states of spin, generates spins according to that distribution at a given rate, and I can send the spins one at a time and record in which state each of them was.

From the point of view of someone who does not know how I’m doing this, he can do quantum tomography of the spins coming out and measure the density matrix with arbitrary precision, but no more than the density matrix—he would not be able to get information on which pure state each spin was. In the end, he gets a probability distribution on the density matrix.

But, meanwhile, I know the state of each spin, and I can give him the record, and he will know too then.

So I can physically build a density matrix in a classical probability sense. Each passage is described with classical probability: the fact that I know the spin states, the statistical analysis done to do quantum tomography, possibly the Poissonian rate of the spins, the fact that the guy before is ignorant and then knows the individual states.
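To pin down what “physically build a density matrix” means here, a toy NumPy sketch of my own (two pure states only; the angles and weights are arbitrary): the machine’s classical pdf over pure states fixes the mixture, and the mixture’s Bloch vector is everything the tomographer can recover.

```python
import numpy as np

def pure_state(theta, phi):
    """Projector |psi><psi| for a spin-1/2 pure state on the Bloch sphere."""
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    return np.outer(psi, psi.conj())

# The machine: a classical distribution over pure states (here just two).
states = [pure_state(0.0, 0.0), pure_state(np.pi / 2, 0.0)]
weights = [0.7, 0.3]

# All the tomographer can ever estimate is the resulting mixture:
rho = sum(w * s for w, s in zip(weights, states))

# ...e.g. via the three Pauli expectation values (the Bloch vector),
# since rho = (I + r . sigma) / 2 for any spin-1/2 state.
sigma = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
r = [np.real(np.trace(rho @ s)) for s in sigma]
rho_hat = 0.5 * (np.eye(2) + sum(ri * s for ri, s in zip(r, sigma)))
assert np.allclose(rho, rho_hat)
```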

If I think quantum, then in each passage I have to use quantum probability. I would have a density operator for the entries of the density matrix! If I do not assume commuting observables and make inference on that, I can explicitly measure small entanglement between me and the guy, or between rho_11 and rho_22, or whatever else.

In other words, the point that the phase space description adds unnecessary complication is ambiguous. Because the density matrix generalizes a distribution, but can also be derived from a distribution.

To the critique of unwanted complication, the quantum statistician can reply “but you got complication because you didn’t assume commuting observables when you could”; conversely, the classical statistician says “you got complication because you took the space of all possible probabilities, you never do that, you always have a model!”

Also: at the end, quantum probability spits out probabilities, in the sense of real numbers related to frequencies of repeated experiments. So, either we go back to the unsatisfactory frequentist paradigm where you always have to say “hafta do enough experiments, so we don’t get fooled by random variations” (THEN WHAT WAS RANDOMNESS AGAIN??) or “assume the system is chaotic, i.e. you really get frequency = probability” (AAAGH) (half-mocking Holevo here), or we have to do something with these numbers. I’m bayesian, so I think ok, these are degrees of belief or plausibility or whatever calibrated on frequencies, i.e. probability, but then wasn’t I a quantum probabilist? Why am I always extracting this special “classical” thing at the end?

So I need a classical notion of probability to use a quantum probability. This selects classical probability as fundamental. As usual, not really sure, I feel there’s a way out from this “recursion paradox”.

Steven:

One of the key problems for me in QM is that there is usually no acknowledgement of the difference in meaning between probability: a measure of information about the state of some particular thing, and probability: the frequency with which things happen in repetition.

Let me give you an example. We both close our eyes and you throw a coin in the air. We hear it land on the ground. How can we describe our state of information about whether it is heads? Namely as p=0.5 heads and p=0.5 tails. Now, we open our eyes. Magically “the wave function collapses” and suddenly in seeing that it is heads, we now have p=1.0 heads and p=0.0 tails. If we think of this probability as a property *of the coin* then it seems that there is a special role for the observer in modifying or making this property come into being… collapsing the wave function so to speak happens “because of measurement”.

If in general we mix the ideas of frequency and probability willy nilly, we can wind up failing to understand why mysterious things seem to happen, like this. I just ask that we keep a *sharp bright line* between frequency and probability, and form a QM theory that keeps that bright line always in focus.

There is no mystery in the heads/tails calculation provided that you recognize that you are measuring your information about the state of the coin, not the actual state of the coin.

Similar issues arise in QM. When we perform a double-slit experiment and see a flash at a particular spot, at the moment we see this flash, our uncertainty about the location of the “particle” disappears. It is *right there* with probability 1. On the other hand, if there is an actual quantum wave moving through space, or through some high dimensional configuration space, this collapsing of *our* probability assignment for *this particular* particle has no logical consequences for the quantum wave’s shape or extent.

My assertion is that if we keep these things in mind, we will make better progress on QM.

This is what “statistics” (or rather probability) has to do with QM.

If the point of interpretation is to create an intuitive picture of microscopic reality, it is pointless, merely a naive attempt to salvage current beliefs. But then, this is especially true of Copenhagen, which assumes convention, then salvages this by denying science aims to describe reality at all. Radical epistemological skepticism is reactionary, like most physicists and other academics, so far as I can tell. (No, being urbane, tolerant, generous etc. may be the idealized self-image of liberals. But social manners are not politics. Liberalism is not leftism, either.) If there is a generally agreed upon basic understanding of how QM/QFT lead to spacetime, or a de Sitter universe, I’m not aware of it. The greatest trend now is the study of a universe we don’t live in, an anti-de Sitter universe.

The point of interpretation is to help us sketch out how, for instance, a quantum object such as the primeval universe connects to our reality. Much of the critique of Copenhagen no more addresses these issues than Copenhagen does. Another way of saying this is to point out that the problem of how to reconcile QM/QFT with General Relativity has been “resolved” by the determination that, somehow, GR is wrong. The real issue then should be interpreting General Relativity, how to revise its foundations. Even people who critique some aspect of fundamental physics agree that GR is wrong. The real question, then, is: “How is GR wrong?” Thus far, the answer is, it contradicts QM/QFT.

I do not see how statistics is relevant to these issues.

Keith:

This Peirce quote is super-relevant to a post that I just wrote and is scheduled to appear in Aug. Could you give me the full citation? Thanks.

As C.S. Peirce once quipped, “you don’t see a rose but rather hypothesize you are seeing one and don’t come to doubt it.”

> But with respect to what we think will repeatedly happen in future, it is informed by both the abstract object and the empirical summary

Exactly. But beyond that, many times what we want to know is not actually a thing we can measure. We can only measure some consequence of that thing. Like, for example, how far away is a lightning strike? The things we can measure are the position of light, the position of sound, and the position of the second hand on our watch (in fact, we can only ever measure the positions of things; no other quantities can be measured directly).

We can *infer* the distance to the lightning strike because we have a model for what will repeatedly happen when lightning strikes the earth, namely that light will propagate outwards at the speed of light, which is a constant, and sound will propagate outwards at a speed which varies from place to place and time to time, but is basically always in a narrowish range…
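As a tiny worked version of that inference (illustrative numbers; the sound speed is the modelling assumption):

```python
# What we actually measure: the watch positions at the flash and the thunder.
delta_t = 6.0        # seconds between seeing the flash and hearing the thunder

# Model assumptions: light arrives effectively instantly; sound travels at a
# roughly constant ~343 m/s (air at about 20 C).
v_sound = 343.0      # m/s

distance = v_sound * delta_t   # inferred, never directly measured: ~2.1 km
```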

In other words, we need inference even to make sense of what people call measurement. I think this is something neglected in QM as well. Measurement is always the position of things. There is no such thing as measuring momentum, or measuring spin, or measuring energy.

Now, I think I see why we were disagreeing about something we might actually agree on.

“what we [think we] know, measured as a probability” is an abstract object (math) taken for now to be beyond doubt – a representation.

“what happened, measured as a frequency” is a distributional summary of what (we think) happened empirically.

But with respect to what we think will repeatedly happen in future, it is informed by both the abstract object and the empirical summary – a reasoned argument for what to expect but expressed as a possibility (math).

So, in terms of Peirce’s 3 categories (yawn): possibility, actuality, and expectation (based on a reasoned argument from possibility and actuality), which becomes the new possibility.

That might be, has been and expected to be.

So thanks.

]]>For my sanity, could you please make an argument yourself? Something I can be sure I understand, grounded on reality, without taking the “Paul Hayes” exam instead of the ones I have to actually take… You can quote at the end if you mention theorems, but only the proof can be external, the statement must be clear. Otherwise I never know if what I’m saying makes sense, or if I’m not understanding anything at all.

Well I’m not sure. For example, it’s easy to see* why that earlier construction is the (needless) introduction of an ∞-dimensional classical probability state space on top of [the “extreme boundary” of] a 3-dimensional quantum probability state space. But if you don’t already see it, it’s probably simply because you’re not familiar with the relevant elements of QP. Probably I should start there but it would be a bit of a chore in this context.

* I mean that quite literally: just by looking at the Bloch sphere/ball – an object which I expect you’re already familiar with (from a ‘vanilla’ QM perspective at least).
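And for what the redundancy looks like concretely on the Bloch ball (my own two-line illustration, not Paul Hayes’): two different classical measures on the boundary pure states can represent one and the same quantum state ϕ.

```python
import numpy as np

# Ensemble 1: equal classical mixture of the pure states |0> and |1>.
zero = np.array([[1, 0], [0, 0]], dtype=complex)
one = np.array([[0, 0], [0, 1]], dtype=complex)
rho_a = 0.5 * zero + 0.5 * one

# Ensemble 2: equal classical mixture of |+> and |->, a different measure
# on the boundary of the Bloch ball.
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)
rho_b = 0.5 * plus + 0.5 * minus

# Both mixtures are the same quantum state (the maximally mixed I/2): the
# classical measure mu carries coordinates the state phi doesn't have.
assert np.allclose(rho_a, rho_b) and np.allclose(rho_a, np.eye(2) / 2)
```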

When I say it was political, I meant in the period 1920 to say 1980 or so. I have no knowledge of the current sociological state, but I think it likely that all the social issues have largely dissipated and that foundations today are a less socially risky area.

To some extent I agree with somebody. Personally, what fired my guts was «the usual rules of conditional probability fail in the quantum realm», and now I feel the same at Andrew saying «What statisticians call “probability theory” is what physicists call “Boltzmann statistics” or “hidden-variable models.” These models are not in general true, in the sense that they do not apply in quantum mechanics. In mathematics we say that a conjecture is false if there are any counterexamples to it. In that sense, probability theory is false.»

It’s saying so starkly “probability is false because quantum” that feels wrong, because there are various interpretations around, none really preferred by the evidence at the moment, so anyone with a different opinion feels excluded.

Then, at the end of the article, one thinks “whatever, they acknowledge all this, it was just a statement made for effect; the point is a strong example of a misspecified model, and an acknowledgment that probability of course is not a proven, definitive form of knowledge”.

It would seem appropriate to state something like “the usual rules of probability fail in the quantum realm if applied naively”, or “quantum mechanics can be more concisely described by quantum probability, so it may be the right way to do probability”.

But, on the meta-level, what is the right amount of starkness, really? Maybe it is good to make strong statements, if they do not make people do actual bad things. If someone comes to me and says “wrong because quantum because Gelman said that” I will send quantum anathema to all of Columbia University! Just kidding, I lack the necessary papers from the church.

Man, that summary of Cox’s theorem is possibly the single most useful thing I gained by following this comment thread!

The point about augmenting the dimensionality: isn’t it mentioned in your very quote from Holevo’s book? (I also went and read that part of the book and part of the addendum before my previous answer.)

Now, I think I’m missing something here. I’m convinced that, mathematically, at the end, I do exactly the same calculations on the density matrix when applying quantum mechanics, whatever is my ontology. So, I understand what it means there will not be a return to classical reality, but I don’t need to change probability for that!

You continue to point out problems in my meta-theoretical choice, but I can’t say I fully understand them. Each time you link a paper or book, I try to read everything, but they are too many and too long and technical to understand everything, and I have to extract the information from the context, and each time I think I found a loophole. But I can’t be sure, at best I understood only a minimal part of all your references!

For my sanity, could you please make an argument yourself? Something I can be sure I understand, grounded on reality, without taking the “Paul Hayes” exam instead of the ones I have to actually take… You can quote at the end if you mention theorems, but only the proof can be external, the statement must be clear. Otherwise I never know if what I’m saying makes sense, or if I’m not understanding anything at all.

I understand if this is a weird request, it always takes more time when I have to do that kind of self-contained explanation myself to someone who knows less. In case you don’t have enough time to cope with my ignorance, I concede a technical victory to quantum probability!

Smolin gives an example of how QM foundations *is* political in his quote of Leon Rosenfeld, a close Bohr collaborator, responding to Bohm:

Yes, and Carroll tells anecdotes about Hugh Everett’s “persecution” by the “establishment”. They may even be true. In the end it’s what these people are [not] saying and writing about the actual content of *modern* QM foundations that’s of most concern.

By “metaphysical preferences” I just meant e.g. choosing between accepting nonlocality or not where neither choice has been ruled out.

Is it a political position to simply write textbooks asserting that no other ideas about the world are possible, which has been proven by Bell? Even when Bell himself published a whole book about the fact that Bell’s inequalities don’t mean that hidden variables are impossible? I think yes. That is, this is a choice, a choice to shun alternative possibilities.

Erk! General QM textbooks aimed at undergraduates often aren’t good on foundational matters. I see the preface advertises it as a book intended to teach how to *do* QM, leaving the “quasi-philosophical” stuff to the end. A remark which leads me to expect errors there! Let’s restore ‘political’ balance with a textbook which advertises itself as a foundations book but which, among other flaws, has Bell ruling out locality.

#despair

Addition is just a mathematical concept encoded in the Peano axioms, but 1 meter + 1 meter means a different thing than 1 kg + 1 kg

Knowing what you mean by “the probability of X is” can only be figured out if you know what was meant by the person who did the calculation.

Kyle, that’s what it seems like to me. Bohr et al seem to have attempted to redefine *what it means to do physics* so that questions of the type Bohm and Bell asked, such as “what is an electron?” and “does it have a particular location before it hits your detector?” were now *no longer physical questions* but rather some kind of unseemly philosophical question for poseurs. Therefore those asking such questions were no longer physicists, or at least no longer wearing their physics hat, and could be safely ignored.

The only “physical” questions allowed by Bohr’s group were “what would be the frequency that light will flash at point x on my screen” or “what would be the frequency with which I would detect a photon if I made my polarizer have this certain angle” etc. They seem to have claimed a non-instrumentalist view, but I can’t make heads or tails of what that would be, it seems like frequency of instrumental outcomes is the only thing they really admit as real.

My own background is doing lots of mathematical modeling in which I build models where an unknown variable is *the most physical thing* and the observations are essentially irrelevant except that they give me information about the physical thing. For example, if I have a video of you doing a cartwheel while wearing reflective dots in a motion capture room, and I want to infer where your center of mass is and what its velocity is at time t, I don’t care how often in repetition you would have dots in a particular position; I want to know, given where the dots were at time t, where your CM was… and I will always wind up with a Bayesian posterior distribution over the location of this CM.

The fact is relatively few physicists even seem to know what Bayes is about, and for the most part I would guess that they would almost universally confuse the frequency with which a particle goes through a given slit with the probability that it did in a given case.

This attempt to redefine what was “allowed” vs what was “not physics” is at the root of why I say things were political (for example, attempting to control who could be considered “in the group vs out”).

I’m neither an expert historian nor an expert physicist, but this is the take-away that I have come up with from a moderate amount of reading. (I’ve read Bell’s book, Bohm’s book, and some of the textbooks by both Griffiths, and Eisberg and Resnick, and most of the Feynman Lectures, all three volumes. I also took a basic undergrad course in intro QM, and a grad level course on stat-mech with a section on quantum statistical mechanics. This makes me not even close to competent in actually doing QM, but I know something about what the issues are and how they’ve been described by different authors.)

The first thing we have to do to make progress in QM in my opinion is to recognize the difference between the frequency of events in an ensemble, and the probability that a given single event will or has occurred.

Simply stop using the term “probability” for wavefunction(x,t)^2 and say “frequency in replications of this experiment”. This will help because for example we can then make sense of the following experiment and the calculations needed to answer the question:

Suppose someone fires a photon at a two-slit apparatus which has a screen in front of one of the slits that oscillates back and forth chaotically, fed by amplified noise from a radio receiver tuned to static background… so that the second slit is either open or closed, but without a recording of the fact, we have no way to know which. Nevertheless, in 50% of repetitions it will be open.

We set up a far screen at the other side of the apparatus to collect photons and show a flash of light.

When the light flashes on the far screen we also have a detector that beeps to know when the experiment has finished. A camera captures a photograph that shows us where the flash was once we’ve observed the photograph. There is also a camera that observes the second slit and can tell us whether it was in the open or closed position once we’ve observed that photograph.

Assuming this background, write down the probability

p(flash at X | Beep)

Write down the probability

p(photon went through the first slit | Beep)

Write down the probability

p(second slit was open | flash at X, Beep)

Write down the probability

p(second slit was open | flash at X, photo of second slit, Beep)

Write down the probability

p(photon went through the first slit | flash at X, photo of second slit, Beep)

Now, there is exactly 1 replication of this experiment. Let F be the frequency with which a given thing occurred in this single replication:

Write down F(flash at X | Beep) (hint it is either 1 or 0 but we don’t know which, we haven’t looked at the photo of the screen yet. Therefore it is not a well known number, though it is in fact a well defined number)

Write down F(flash at X | Beep, Photo of screen) (hint, we know this number exactly once we see the photo of the screen)

Write down F(second slit was open | apparatus) (hint, it’s either 1 or 0 but we don’t know which)

Write down F(second slit was open | flash at X, apparatus) (again, it’s either 1 or 0 but we don’t know which)

Write down F(second slit was open | flash at X, photo of slit) (now we know whether it’s 1 or 0)

Write down F(photon went through first slit | flash at X, photo of slit) (depends on whether the photo shows the second slit was open or closed)

I think you can see that this wedge we drive between what we know, measured as a probability, and what happened, measured as a frequency, is a critical distinction needed to make progress. Until we make this distinction, QM will continue to pun on the two meanings of probability just like statistics has punned on the two meanings of “significant”.

This sounds like how I have read that Chomsky operated in linguistics: defining any questions unanswerable by his schema as uninteresting and metaphysical. True, Bob Carpenter?

To me, different interpretations of QM are not even directly relevant. Andrew argued earlier in his blog that “Probability is a mathematical concept encoded in the Kolmogorov axioms. That’s it. No need to argue over a canonical interpretation. It’s just math!”

Smolin gives an example of how QM foundations *is* political in his quote of Leon Rosenfeld, a close Bohr collaborator, responding to Bohm:

“I certainly shall not enter into any controversy with you or anybody else on the subject of complementarity, for the simple reason that there is not the slightest controversial point about it…

The difficulty of access to complementarity which you mention is the result of the essentially metaphysical attitude which is inculcated to most people from their very childhood by the dominating influence of religion or idealistic philosophy on education. The remedy for this situation is surely not to avoid the issue but to shed off this metaphysics and learn to look at things dialectically.” (Smolin, pg ~100 electronic version).

It’s hard to see how this is anything but a “You’re not Marxist enough to understand the true meaning of QM”. From the other end of course came the “Bohm’s too Marxist to be allowed into US Universities” thanks to Joseph McCarthy. So to imagine that politics played no role in theoretical physics between 1920 and say 1990 is just too wide of the mark.

If you read Griffiths’ book, which is one of the most common undergrad/beginning-grad textbooks on QM, you will find him advocating, on page 6, the idea that Bell proved that there are no hidden variable theories, and asserting that the “orthodox” opinion due to Bohr is the only possibility. He also claims, basically, that the collapse of the wave function is just a fact.

an actual quote: “for now, suffice it to say that the experiments have decisively confirmed the orthodox interpretation. a particle simply does not have a precise position prior to measurement…”

(you can use “look inside” on amazon to see these pages https://www.amazon.com/Introduction-Quantum-Mechanics-David-Griffiths/dp/1107189632)

Is it a political position to simply write textbooks asserting that no other ideas about the world are possible, which has been proven by Bell? Even when Bell himself published a whole book about the fact that Bell’s inequalities don’t mean that hidden variables are impossible? I think yes. That is, this is a choice, a choice to shun alternative possibilities.

I don’t know enough about QM to know how to proceed from any of the other theories to develop them further. It’s not within my power. What I do recognize is so long as we publish widely read textbooks claiming any other interpretation is mathematically impossible or completely disproven by experiment… we will not encourage any investigation of this question.

A little bit of an aside here. The book I recommended to Daniel Lakeland above – Einstein’s Unfinished Revolution by Lee Smolin – is actually quite level and generous in consideration of all the existing interpretations of QM. I actually came away from that book still fairly sympathetic to Bohr, and more confirmed in my broadly instrumentalist take on philosophy of science, despite Smolin’s impassioned and cogent arguments on behalf of “realism”. Although, I really enjoyed the principles and new directions that Lee advocates, and have enjoyed thinking about them in relation to fields of study that are more in my wheelhouse…

Bohmian interpretations are rejected not because they give the wrong answer, but because they give the right answer for reasons physicists prefer to reject (ie. it’s a political issue, they don’t like the nonlocality, and apparent faster-than-light effects […])

They’re (informed) metaphysical preferences, not political. In the light of developments in quantum foundations (including QFT) it’s hard to see why anyone would be spooked into accepting nonlocality / re-accepting action at a distance.

a split between “quantum stuff” and “classical stuff” which is built into many other interpretations.

Many? Either way, if you know of any interpreters who’ve got that split built in to their interpretation, and they’ve justified it by an appeal to Bohr, you can refer them to this.

Basically saying that “normal probability theory doesn’t apply” to QM is perpetuation of a political position of Bohr’s. Bohr and co browbeat the QM community for decades to try to eliminate all ideological opponents in a typically 20’th century cold war.

As a political scientist you might find that history interesting, and how much of what’s “weird” about QM is really about political power struggles within academia.

Perhaps a political scientist could explain why demonisations of Bohr along with (sometimes tacit, possibly unwitting) dismissals of much of modern quantum foundations* seem to have become popular with some proponents of some QM interpretations / alternatives. Recently Sean Carroll has put the boot in too, but on the MWI side (usually it’s the Bohmians).

* This is by far the worst aspect of this sort of thing of course. Fuchs rightly laments it in his review of Becker:

From this point of view, the tools and concepts of quantum information are what were needed to make sense of the deeper elements of the Copenhagen interpretation all along. Yet, this is an interpretative route hardly mentioned in Becker’s book, though it now commands a significant portion of quantum foundations research worldwide. […] I suspect it would be difficult for pursuers of hidden-variable theories, spontaneous collapse models, and many-worlds interpretations to exhibit a similar quantity of research activity in their own fields.

What I agree with is the following: if your model is that y is independent and identically distributed across N experiments, and in fact it’s independent but NOT identically distributed, then you will find that Bayes gives the wrong answer. It’s just so routine that, say, a mouse experiment doing surgeries to see if skin regrows after a certain kind of injury doesn’t have outcomes that depend on whether you left the window cracked open on your car when you parked it in the parking lot below the lab building.
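A quick simulation of that failure mode (my own sketch, NumPy, arbitrary numbers): fit an iid model to data that are independent but not identically distributed and the posterior concentrates, confidently, on a value that describes none of the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four independent experiments, each with its own mean: independent but
# NOT identically distributed.
true_means = [0.0, 2.0, 4.0, 6.0]
y = np.concatenate([rng.normal(m, 1.0, size=50) for m in true_means])

# Misspecified model: y ~ iid Normal(mu, 1) with a flat prior on mu.
# The posterior is Normal(ybar, 1/n): it concentrates on the grand mean.
n = len(y)
post_mean, post_sd = y.mean(), 1.0 / np.sqrt(n)

print(post_mean, post_sd)  # ~3.0 +/- 0.07, far from every true mean
```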

QM is just a scenario where nonlocal state of a system such as whether a slit is open in the “parking lot” *does* directly influence the outcomes of the experiments… But it’s not the only one. Feynman’s example from the cargo cult speech of the guy who was forced to put his rat maze in a bed of sand to keep the rats from “hearing” the sound that their footsteps made when they got near the spot with the reward is a good example. Seemingly irrelevant things like whether the maze was on a table top or a bed of sand actually change the distribution of the outcomes.

Daniel:

After we get the reviews back from the journal, I will try to rewrite that section in light of the comments on this post. The point of that example is that a model that seems natural, based on intuitive ideas of physics, does not work. Bayesian inference really does fail in that if you estimate p(x) from one experiment, p(y|x) from two other experiments, and p(y) from a fourth experiment, these distributions are not consistent with each other. Now, sure, you can (correctly) point out that the laws of quantum physics tell us that we cannot estimate these probabilities from four separate experiments, and one can put this in a Bayesian framework by requiring that we condition on the measurement. That’s all fine—but when we’re not doing quantum mechanics, we don’t do that conditioning; we routinely estimate different aspects of a joint distribution from different experiments, indeed that’s part of the whole likelihood and prior thing. So I think there’s definitely something important about this example. I want to convey this without getting tangled in disputes that are occurring within physics.
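A toy numerical version of the inconsistency (my own sketch, with a made-up Gaussian slit amplitude; not the paper’s actual model): the marginal p(y) observed with both slits open is not the mixture of the conditionals estimated with one slit open at a time, because of the interference cross term.

```python
import numpy as np

y = np.linspace(-10, 10, 2001)
dy = y[1] - y[0]

def amplitude(y, slit):
    """Made-up single-slit amplitude: Gaussian envelope times a phase."""
    return np.exp(-0.25 * (y - slit) ** 2) * np.exp(3j * slit * y)

def normalize(p):
    return p / (p.sum() * dy)

psi1, psi2 = amplitude(y, -1.0), amplitude(y, +1.0)

p_slit1 = normalize(np.abs(psi1) ** 2)        # p(y | only slit 1 open)
p_slit2 = normalize(np.abs(psi2) ** 2)        # p(y | only slit 2 open)
p_mixture = 0.5 * p_slit1 + 0.5 * p_slit2     # the 'natural' Bayesian p(y)
p_both = normalize(np.abs(psi1 + psi2) ** 2)  # p(y) actually observed

# The cross term 2*Re(conj(psi1)*psi2) makes these disagree.
print(np.max(np.abs(p_both - p_mixture)))     # clearly nonzero
```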

I think a big part of the reason QM seems so “weird” is that in the past, if you came in with a reasonable theory that doesn’t seem too weird (ie. not a non-realist one where there are no real particles, or they have no real location, or whatever), like Bohm, who had a theory of actual particles moving around, you would be stepping on the toes of all these powerful people who basically asserted the inherent weirdness of QM. Fermi, Bohr, Heisenberg and so forth.

Another aspect is that as a physicist it would be unusual for you to have encountered ideas like the ones explored by Kevin S Van Horn in his summary of Cox’s theorem: http://ksvanhorn.com/bayes/Papers/rcox.pdf

That is, you’d be unlikely to have spent much time discussing or understanding basic ideas in probability theory outside what might be called “naive” probability theory (you know, dice, and flipping coins and such). Not impossible, but just relatively uncommon, particularly for say a 3rd year undergrad.

So, when your professor says “in QM we do this calculation and then Psi^2 is the probability to find the particle at the given point” you’d just accept that without any problem, and you’d have the idea that “probability = how often it happens in repetition” without questioning it. It doesn’t seem problematic. And when you ask about how that happens you’d just hear “it’s an axiom of QM called the Born rule, and it’s been experimentally verified to 18 decimal places” or some such thing. These days undergrad books just give the QM theory as axioms and teach it like it’s unquestionable due to the extraordinary precision of predictions. Never mind that you can question what a calculation means without questioning whether you got the right numbers.

Of course, this just pushes things under the rug… So there’s usually some discussion, you hear about “collapse of the wave function” or some such thing, it all sounds mysterious, you assume that some high end people in the field probably understand it pretty well, and you spend your time learning how to grind through the formalisms to get reasonable calculations, which is *hard* and requires a crap load of learning. Out the other end of a masters or PhD program and you’re able to go to work for Intel designing semiconductor structures using first principles QM calculations, and you’re happy.

You will *not* spend your time thinking for a long time “under what circumstances could we define real actual particles with positions and momenta” because if you go down that branch you *will* fail your physics tests… and if you do that you’ll wind up in philosophy or math, where you probably won’t know enough physics to make real progress unless you’re quite exceptional.

So basically progress in the foundations of QM is left to a very small group of physicists who make it through grad school, get tenure, and then return to earlier questions that bothered them, or to a few high end philosophers of science or mathematicians like Edward Nelson who explored what he called “stochastic mechanics” but eventually gave up on it (though apparently possibly for the wrong reasons, he seems to have convinced himself that you can’t get entanglement from stochastic mechanics, but this isn’t exactly true I think)

anyway, end of story: we’re stuck in QM in large part because it’s so hard to do useful QM that you have to spend all your time learning how to do it and can’t spend any time thinking about what the heck it means; also, doing that would step on toes, so there have in the past been lots of social reasons not to.

Now that it’s several decades past when the Fermis and the Feynmans and so forth have died, and their direct students are dead or retired or about to retire… we’re seeing more active work on the foundations of QM because it doesn’t step directly on the toes of “great men”. The end result is that everyone seems to agree that no-one has the slightest idea what QM actually means; it’s a theory that predicts outcomes of experiments, but it doesn’t have a unique underlying physical model. Bohr basically insisted that it *couldn’t* have an underlying model and we *shouldn’t try*… which is why it’s taken so long to get people to think about the underlying model.

I think the best progress on QM will come from someone who takes a realist approach (particles exist and have positions and momenta), uses Bayesian probability to describe the information we have about the positions and momenta, accepts nonlocality as a fact of life, and tries to make progress directly on the consequences for describing QM in a relativistic context including gravity.

unfortunately it’ll require a very particular set of skills and background, and a certain kind of aesthetic sense that isn’t clobbered by an established view so it may take a long time.

I really liked the Feynman biography by James Gleick, *Genius*, for its insight into the sociology of early quantum mechanics. Of course, there’s also *Linguistics Wars*; that one I know firsthand from working in its post-apocalyptic aftermath.

> “hidden-variable models.” These models are not in general true, in the sense that they do not apply in quantum mechanics.

Andrew, this is quite simply false. The big problem with bringing QM into all of this is that *there is no single thing called QM*. Specifically, while lots of people can calculate stuff and get the right answer for example experimental situations, there is no universal agreement on what it is that is being calculated or what the calculations mean about stuff in the world. There are in fact many models of what QM means… the Copenhagen interpretation is not a single interpretation… but its various proponents share some family resemblance. Bohmian interpretations are rejected not because they give the wrong answer, but because they give the right answer for reasons physicists prefer to reject (ie. it’s a political issue, they don’t like the nonlocality, and apparent faster-than-light effects, and more to the point, in the early years they couldn’t let themselves be associated with Bohm because of Joseph McCarthy’s witchhunt). Many Worlds just seems goofy to a lot of people. Decoherence seems like it’s just a way to say “QM is an asymptotic theory that only applies for ‘short’ times, until things get mixed up and entangled with the environment”. Theories involving “spontaneous collapse of the wavefunction” require us to believe things that seem probably false, like subtle failures of conservation of energy etc etc.

There *ARE* many many examples of hidden variable theories in QM. There is even a whole book surveying them, which someone I met from this blog recently bought and read: https://www.amazon.com/Survey-Hidden-Variables-Theories-International-Monographs-ebook/dp/B01DRXPOAM

The only requirement for a hidden variable theory in QM is that it be nonlocal. This is what Bell’s theorem shows: that there are no local realist theories (ie. theories where the particle has a definite position and goes through one of the slits, but at the same time is not affected by nonlocal facts, like whether the second slit is open).

The review from the other commenter said the hidden variables book was a bit funny because it kept complaining about how people keep talking about hidden variable theories being impossible, when in fact there are quite a few of them, and the “proof of impossibility” by von Neumann was debunked ages ago…

So, the QM stuff is a huge distraction, if it’s not essential to your larger argument, you should just drop it. If it is essential to your larger argument, you should read enough about some of the other theories of QM. I recently had recommended to me by Chris Wilson from this blog the book by Lee Smolin:

I got it electronically from my library. It’s quite good, and it goes into *all* of this kind of discussion, the different interpretations and what’s generally wrong with them. In particular, I found that the main objections he brings up to Bohmian theory didn’t seem even the slightest bit objectionable. In fact, one of the objections is that the wave functions go on forever, spreading out and affecting all of the universe. This seems to me like a benefit of the theory rather than an objection: it’s obvious that there is only one single universe, not a split between “quantum stuff” and “classical stuff” like the one built into many other interpretations.

The real challenge for Bohm is to understand the role of relativity in QM, but it turns out that this is a challenge for all of QM; even relativistic theories like Dirac’s equation don’t fully resolve the issue, particularly insofar as there is no meshing of anything QM with anything gravitational. Smolin himself works on “loop quantum gravity”, but it’s not yet a full theory.

If you take Bohm’s theory, then quite simply the world is like an MCMC chain… Little particles go zooming around according to rules that involve complex high-dimensional calculations over configuration space. As they do this, they rapidly converge into an ensemble that has statistical properties equivalent to Born’s rule; basically, they converge to be distributed asymptotically as Psi^2. Thus Born’s rule is not an axiom at all, and normal Bayesian probability applies to the positions and momenta of particles. It just happens that, because of the wave the particles interact with, the tiniest errors in our knowledge of position or momentum get blown up over time into the full distribution of possibilities that we see in the outcome (the flashes of light on the screen in a two-slit experiment, for example). This is no weirder than the fact that starting an MCMC chain at 1 vs. at 1.00000001 would, after many iterations, lead to the “particles” being in totally different locations, each distributed according to whatever density you’re using in your sampler.
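
To make the MCMC analogy concrete, here is a small toy sketch of my own (it is not Bohmian dynamics; different random proposal streams stand in, crudely, for the chaotic amplification of the initial 1e-8 error): two Metropolis chains started at 1 and 1.00000001 end up in totally different places, yet both reproduce the same target distribution.

```python
import math
import random

def metropolis(x0, n_steps, seed):
    """Toy Metropolis sampler targeting a standard normal density."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, 1.0)
        # Accept with probability min(1, p(proposal)/p(x)) for p(x) ~ exp(-x^2/2).
        log_ratio = 0.5 * (x * x - proposal * proposal)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
        samples.append(x)
    return samples

# Two chains whose starting points differ by 1e-8.
a = metropolis(1.0, 100_000, seed=1)
b = metropolis(1.00000001, 100_000, seed=2)

print(a[-1], b[-1])                       # final positions: totally different
print(sum(a) / len(a), sum(b) / len(b))   # means: both near 0
print(sum(x * x for x in a) / len(a),     # second moments: both near 1,
      sum(x * x for x in b) / len(b))     # matching the N(0, 1) target
```

The individual trajectories are unpredictable, but the ensemble statistics are not; that is the sense in which Born-rule statistics can coexist with definite particle positions.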

The weird part of Bohm’s theory is that it requires that a kind of “quantum information” essentially travels faster than light. However, it’s still impossible to send regular information over such channels, because doing so would require controlling the position of particles to well below the resolution that is actually achievable.

Basically, saying that “normal probability theory doesn’t apply” to QM perpetuates a political position of Bohr’s. Bohr and co. browbeat the QM community for decades to eliminate all ideological opponents, in a typically 20th-century cold war.

As a political scientist you might find that history interesting, and how much of what’s “weird” about QM is really about political power struggles within academia.

]]>Bell’s paper proceeds from the assumption that there exists a latent parameter lambda, the probability distribution of which induces an expectation over the observable measurements. This is realism, or hidden variables, defined by EPR as:

“If without in any way disturbing a system, we can predict with certainty (i.e. with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity”

So when you say:

> No. The only hypothesis needed for the Bell inequality is locality.

you are incorrect.

Combining realism with locality, the requirement that a measurement at B doesn’t affect a measurement at A, you derive an inequality that quantum mechanics violates. Hence, what the paper rules out is “local realism”, or local hidden-variable theories. This purpose is stated in the introduction of the paper:

“Moreover, a hidden variable interpretation of elementary quantum theory has been explicitly constructed. That particular interpretation has indeed a grossly nonlocal structure. This is characteristic, according to the result to be proved here, of any such theory which reproduces exactly the quantum mechanical predictions.”

“any such theory” being a hidden variable interpretation.
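
In symbols, the assumption described above takes the standard textbook form (this is a paraphrase, not a quote from Bell’s paper):

```latex
% Realism: outcomes A(a,\lambda), B(b,\lambda) = \pm 1 are fixed by the
% local settings a, b and the latent parameter \lambda.
% Locality: A does not depend on b, and B does not depend on a.
\[
  E(a,b) \;=\; \int \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda)\, d\lambda .
\]
% Drop either ingredient and the Bell inequality no longer follows.
```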

Local realism isn’t a term I made up; it’s a standard phrase. There exist local “non-real” theories that aren’t incompatible with experiment, though they’re all New-Agey “consciousness/mentalist” interpretations that aren’t very illuminating and sort of sidestep the issue. Nonetheless, they exist.

]]>I feel like your only error is the use of a physics example at all. It’s drawn out a lot of people who are less interested in a point about the dangers of building a probability model by intuition and more interested in demonstrating that they once took a course in quantum mechanics.

]]>David:

What statisticians call “probability theory” is what physicists call “Boltzmann statistics” or “hidden-variable models.” These models are not in general true, in the sense that they do not apply in quantum mechanics. In mathematics we say that a conjecture is false if there are any counterexamples to it. In that sense, probability theory is false. That said, probability theory is very useful! There are some settings such as coin flipping and die rolling where probability theory is evidently true, and other settings such as Bose-Einstein statistics, Fermi-Dirac statistics, and the two-slit experiment where probability theory, as it would be intuitively applied, is false. An open question is the applicability of intuitive probability theory in other settings. As we discuss in our paper, the failings of probability theory can be resolved by changing or expanding the probability model in various ways, but it would not be apparent ahead of time that such extensions would be necessary.
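
To make the Bose-Einstein point concrete, here is the standard two-particle, two-state counting example (a textbook illustration, not taken from the paper under discussion):

```latex
% "Intuitive" (Maxwell–Boltzmann) counting treats the particles as
% labelled, giving four equally likely configurations
% (1,1), (1,2), (2,1), (2,2):
\[
  P_{\mathrm{MB}}(\text{both in state 1}) = \tfrac{1}{4} .
\]
% Bose–Einstein counting treats only the occupation numbers as real,
% giving three equally likely configurations \{2,0\}, \{1,1\}, \{0,2\}:
\[
  P_{\mathrm{BE}}(\text{both in state 1}) = \tfrac{1}{3} .
\]
```

Nature uses the second counting for photons, which is the sense in which the “intuitive” probability model, not the probability calculus itself, fails.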

You write, “The first physics error in the paper is assuming that the detectors in experiment 4 do not affect the trajectory of the photons.” What we actually write when discussing experiment 4 is, “putting detectors at the slits changes the distribution of the hits on the screen.” So, yes, we do say that the detectors affect the trajectory. In any case, this discussion is useful, because if such a basic point was not made clear in our paper, to the extent that you could read a statement as saying its opposite, then we have not written it clearly enough. So I appreciate the feedback.

]]>Giacomo,

That non-orthogonal subspaces come out of a tensor product with a much larger (possibly infinite) system is not a problem! It is the actual condition of experiments where quantum mechanics applies, in which you study a very small system with ZILLIONS OF DEGREES OF FREEDOM in your instruments!

Tensor products weren’t mentioned and aren’t relevant to the particular technical and conceptual problems with that classical [“hidden variables”] construction which Holevo points out there. Tensor products – describing compositions of ‘small’ systems* – expose further problems with such constructions – most notably “spooky action” of course (see e.g. that monograph’s Supplement, which covers “hidden variables” in more depth).

[…]

“Quantum probability” (or “algebraic probability”) certainly isn’t just a name, and of course it doesn’t mean you should start applying it everywhere. Let me re-emphasise that *it contains* classical probability and of course you’ll want to keep on using the conceptually and technically simpler Kolmogorovian representation of that part of it wherever it’s appropriate. But the physics of the last century or so has revealed that the assumption that *everything* “at some point can be described deterministically” – the “classicality assumption” – is extremely dubious. Probably, there will be “no return to classical reality”.

* The tensor product description of a ‘small’ system of interest with a ‘large’ measuring instrument system is relevant to a very different problem: the “(small) measurement problem”.
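
To illustrate the point above that quantum probability *contains* classical probability, here is a tiny numerical sketch of my own in Python (a toy example, not drawn from any of the cited references): a diagonal density matrix paired with a diagonal (commuting) observable reproduces an ordinary classical expectation exactly.

```python
import numpy as np

# Classical picture: a probability distribution over two outcomes,
# with values f = (f1, f2) of some random variable.
p = np.array([0.3, 0.7])
f = np.array([-1.0, +1.0])
classical_expectation = p @ f

# Quantum picture: the same data packaged as a diagonal density matrix
# and a diagonal observable (everything commutes).
rho = np.diag(p)
F = np.diag(f)
quantum_expectation = np.trace(rho @ F)

assert np.isclose(classical_expectation, quantum_expectation)

# The extra generality appears with observables that don't commute
# with rho; no single classical joint distribution reproduces the
# expectations of all such observables at once.
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # off-diagonal, Pauli-X-like
print(np.trace(rho @ X))  # still a perfectly good expectation value
```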

]]>Quotes are from the paper:

“Probability theory isn’t true”: I’m not sure how this could be. The formula for conditional probability is a theorem. It is possible that probability is being applied to the real world incorrectly.
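
For reference, the theorem in question, in standard Kolmogorov form (stated here only to fix what “is a theorem” means):

```latex
\[
  P(A \mid B) = \frac{P(A \cap B)}{P(B)} \quad (P(B) > 0),
  \qquad
  P(A) = \sum_i P(A \mid B_i)\, P(B_i)
\]
% for any partition \{B_i\}. In the two-slit experiment it is the second
% identity (total probability) that seems to fail; on this commenter's
% view, "went through slit i" with no detector present is not a
% well-defined event B_i of the model, so the theorem is misapplied
% rather than refuted.
```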

“This is all standard physics”: Perhaps. But, is this “standard” physics being explained correctly?

]]>The references in the paper do not include any of the items that one should read to understand the physics. The first physics error in the paper is assuming that the detectors in experiment 4 do not affect the trajectory of the photons. While this is a plausible assumption, theory and experiment say that it is false.

“in quantum mechanics there is no joint distribution or hidden-variable interpretation”: Whether this is true depends on what it means. It is true that the world is not local. It is not true that particles don’t have positions. The formulas say that the trajectory depends on things that are not just along the trajectory, e.g., detectors at slits that the particle does not go through. Experiment confirms this.

It is true that some of the criticisms of the claim have themselves been incorrect. But, this does not mean that all criticisms are incorrect.

]]>The hypothesis is locality, not “local realism”.

]]>The only way to go further is to continue doing that with more and more diverse audiences until you hit one that can point out how you are mistaken. And then repeat over and over again with the less wrong arguments ;-)
