Don’t say your data “reveal quantum nature of human judgments.” Be precise and say your data are “consistent with a quantum-inspired model of survey responses.” Yes, then your paper might not appear in PNAS, but you’ll feel better about yourself in the morning.

This one came up in a blog comment by Carlos; it’s an article from PNAS (yeah, I know) called “Context effects produced by question orders reveal quantum nature of human judgments.” From the abstract:

In recent years, quantum probability theory has been used to explain a range of seemingly irrational human decision-making behaviors. The quantum models generally outperform traditional models in fitting human data, but both modeling approaches require optimizing parameter values. However, quantum theory makes a universal, nonparametric prediction for differing outcomes when two successive questions (e.g., attitude judgments) are asked in different orders. Quite remarkably, this prediction was strongly upheld in 70 national surveys carried out over the last decade (and in two laboratory experiments) and is not one derivable by any known cognitive constraints.

This set off a bunch of alarm bells:

1. “Universal, nonparametric prediction”: I’m always suspicious of claims of universality in psychology.

2. “Quite remarkably, this prediction was strongly upheld in 70 national surveys”: Quite remarkably, indeed. This just seems a bit too good to be true.

3. And the big thing . . . how can quantum theory make a prediction about survey responses? Quantum theory is about little particles and, indirectly, about big things made from little particles. For example, quantum theory explains, in some sense, the existence of rigid bodies such as tables, chairs, and billiard balls.

From reading the paper, it’s my impression that they’re not talking about quantum theory, as it’s usually understood in physics, at all. Rather, they’re talking about a statistical model for survey responses, a model which is inspired by analogy to certain rules of quantum mechanics. That’s fine—I’m on record as offering tentative support to this general line of research—I just want to be clear on what we’re talking about. I think it might be clearer to call these “quantum-inspired statistical models” rather than “quantum probability theory.”

As for the model itself: I took a quick look and it seems like it could make sense. It’s a latent-variable multidimensional model of attitudes, with the twist that whatever question was asked before could affect the salience of the different dimensions. The model makes a particular prediction which they call the QQ equality and which they claim is supported in their 70 surveys. I did not look at that evidential claim in detail. One thing that confuses me is why they are treating this QQ equality as evidence for their particular quantum-inspired model. Wouldn’t it be evidence for any model, quantum-inspired or otherwise, that makes this particular prediction?
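For what it’s worth, the QQ equality itself is easy to state and check: in a two-question yes/no survey administered in both orders, the model predicts that the probability of giving the same answer to both questions (yes-yes plus no-no) is identical across the two orders. A minimal sketch, with invented counts (these numbers are not from the paper):

```python
# Sketch of the QQ (question-order) equality test as I understand it from
# the paper; all counts below are invented for illustration.

def qq_statistic(counts_ab, counts_ba):
    """Each dict maps (answer1, answer2) -> count for one question order.
    The QQ equality predicts this statistic is ~0:
        q = [p_AB(y,y) + p_AB(n,n)] - [p_BA(y,y) + p_BA(n,n)]
    """
    n_ab = sum(counts_ab.values())
    n_ba = sum(counts_ba.values())
    same_ab = (counts_ab[("y", "y")] + counts_ab[("n", "n")]) / n_ab
    same_ba = (counts_ba[("y", "y")] + counts_ba[("n", "n")]) / n_ba
    return same_ab - same_ba

# Hypothetical survey, 1,000 respondents per order.
counts_ab = {("y", "y"): 300, ("y", "n"): 150, ("n", "y"): 100, ("n", "n"): 450}
counts_ba = {("y", "y"): 260, ("y", "n"): 190, ("n", "y"): 140, ("n", "n"): 410}

print(f"QQ statistic: {qq_statistic(counts_ab, counts_ba):+.3f}")
```

A value near zero is what the authors report across their 70 surveys; these invented counts give +0.080, so this hypothetical dataset would violate the equality.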

It’s not clear to me that the quantum-inspired nature of the model is what is relevant here, so I think the title of the paper is misleading.

Here it is again:

Context effects produced by question orders reveal quantum nature of human judgments

I think a more accurate title would be:

Context effects produced by question orders are consistent with a quantum-inspired model of survey responses

Here are the explanations for my corrections:

1. Changed “reveal” to “are consistent with” because the data are, at best, consistent with a particular model. This is not the same as revealing some aspect of nature.

2. Changed “quantum nature” to “quantum-inspired model” because, as discussed above, it’s not a quantum model, it’s only quantum-inspired; also, it’s just a particular model, it’s not a property of nature. If I were to fit a logistic regression to some test questions—that’s standard practice in psychometrics, it’s called the Rasch model—and the model were to fit the data well, it would not be correct for me to say that I’ve revealed the logistic nature of test taking.

3. Changed “human judgments” to “survey responses” because there’s nothing in the data about judgments; it’s all survey responses. It would be ok with me if they wanted to say “attitudes” instead. But “judgments” doesn’t seem quite right.

Anyway, there might be something there. Too bad about all the hype. I guess the justification for the hype is that, without the hype, the paper probably wouldn’t’ve been published in a tabloid; and without the tabloid credentials, maybe our blog readers would never have heard about this work, and then we wouldn’t’ve heard about it either.

43 thoughts on “Don’t say your data “reveal quantum nature of human judgments.” Be precise and say your data are “consistent with a quantum-inspired model of survey responses.” Yes, then your paper might not appear in PNAS, but you’ll feel better about yourself in the morning.”

  1. I agree with Andrew about the hype-ish way that “quantum” is used. I’ve read a couple of papers about this, and the gist seems to be this:

    Quantum mechanics is distinguished by non-commutativity of certain pairs of physical quantities, such as (momentum, position) along a given axis, or (energy, time). Measuring one member of the pair necessarily introduces uncertainty in the value of the other. These “quantum” statistical approaches try to capture something analogous, so the order in which a pair of measurements is made can make a substantial difference.

    The next step is to try to adopt the mathematical machinery of QM without necessarily claiming that any decisions or thought processes are “quantum mechanical” in nature. The machinery could crank out probabilities of responses, etc.

    If this were to apply in the world of surveys, the “measurement” of one question would introduce uncertainty in the knowledge of its corresponding pair (I don’t know how you are supposed to identify the corresponding pairs). Hence the importance of the order of the questions.
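    A toy numerical version of this (my own illustration, not taken from the paper): represent the respondent’s attitude as a unit vector, and each question’s “yes” answer as a projection onto an axis. When the two projectors don’t commute, answering the first question changes the state, and the joint probabilities depend on the order.

```python
# Toy illustration (my own, not from the paper) of how non-commuting
# projections produce question-order effects. A respondent's attitude is a
# unit vector; answering "yes" projects the state onto that question's
# "yes" axis, changing the state for the next question.
import numpy as np

def yes_projector(theta):
    """Projector onto the 'yes' axis of a question, oriented at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

def p_yes_then_yes(state, first, second):
    """P(yes to the first question, then yes to the second)."""
    after_first = first @ state          # collapse onto the first 'yes' axis
    return float(after_first @ second @ after_first)

state = np.array([1.0, 0.0])             # initial attitude state
A = yes_projector(0.3)                   # two questions whose 'yes'
B = yes_projector(1.0)                   # projectors do not commute

p_ab = p_yes_then_yes(state, A, B)
p_ba = p_yes_then_yes(state, B, A)
print(f"P(yes,yes) asking A first: {p_ab:.3f}")
print(f"P(yes,yes) asking B first: {p_ba:.3f}")
```

    With these (arbitrary) angles the two orders give P(yes, yes) of roughly 0.53 versus 0.17: a large order effect from nothing but non-commuting projections.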

    • I think this is a reasonable summary of the approach.

      I, too, have followed the development of these models over the years and adopting quantum-inspired probability updating rules seems like a valuable approach to have in the modeling toolshed. Using those tools can yield models that are useful and concise descriptions of cognitive processes in some domains.

      That said, I share your distaste for how the word “quantum” seems to be used just to rile people up, since it is not a very direct description of the underlying model. It would be like describing a model that used classical probabilities as a “Soviet model” because Kolmogorov formalized those axioms and lived in the Soviet Union.

  2. ?! Did you write this article before reading my comments in this one? QT is [a generalisation of classical/commutative] probability theory. It’s about whatever it’s [appropriately] applied to.

    • > QT is [a generalisation of classical/commutative] probability theory.

      I thought it was more than that. Electrons and stuff.

      > It’s about whatever it’s [appropriately] applied to.

      Maybe the “quantum theory” label could be reserved for its application to physics, then.

      • It isn’t more than that (see my comment for why). Its name is unfortunate, though. It’s also sometimes called “algebraic probability” (better) or “noncommutative probability” (not as good), but the real problem is the long-standing failure to recognise that it is probability theory rather than physical theory.

  3. This is a contributed paper (i.e., it did not go through the normal peer review process). Wish they would flag this more prominently…

    “Contributed by Richard M. Shiffrin” – usually contributed papers list the peer reviewers (that were selected by the NAS-member author) right here. But I guess they weren’t doing that in 2014 when this was published.

  4. I’ve been following the senior author’s work intermittently for some time now. The inspiration for these models seems to come from the use of drift-diffusion models of judgment and decision making, where they use elaborate solutions of Fokker-Planck equations to explain accuracy-speed trade-offs. Here are some resources where they take this whole quantum thing very seriously:
    + https://www.cambridge.org/core/books/quantum-models-of-cognition-and-decision/75909428F710F7C6AF7D580CB83443AC

    + http://www.psy.vanderbilt.edu/jdmlab/papers/YearsleyJMP.pdf

    As Andrew and Michael point out in their BBS commentary, Hilbert spaces may prove useful in applied statistics. However, they sincerely believe that the brain/mind represents uncertainty not with classical probability but with non-commutative algebras and Hilbert-space representations of them. Overall, their claims seem to be a bit of a stretch, based on an unnecessarily complex model.

    I am curious and looking forward to what others here think of these models.

  5. Andrew said,
    “3. And the big thing . . . how can quantum theory make a prediction about survey responses? Quantum theory is about little particles and, indirectly, about big things made from little particles. For example, quantum theory explains, in some sense, the existence of rigid bodies such as tables, chairs, and billiard balls.
    From reading the paper, it’s my impression that they’re not talking about quantum theory, as it’s usually understood in physics, at all. Rather, they’re talking about a statistical model for survey responses, a model which is inspired by analogy to certain rules of quantum mechanics. That’s fine—I’m on record as offering tentative support to this general line of research—I just want to be clear on what we’re talking about. I think it might be clearer to call these “quantum-inspired statistical models” rather than “quantum probability theory.””

    Having had a tiny bit of background involving some of the abstract mathematical models of quantum theory, I turned to Wikipedia and found the article “Mathematical formulation of quantum mechanics” (https://en.wikipedia.org/wiki/Mathematical_formulation_of_quantum_mechanics).

    It starts, “The mathematical formulations of quantum mechanics are those mathematical formalisms that permit a rigorous description of quantum mechanics.”

    Almost halfway down (after many definitions), it states:

    “One can in this formalism state Heisenberg’s uncertainty principle and prove it as a theorem, although the exact historical sequence of events, concerning who derived what and under which framework, is the subject of historical investigations outside the scope of this article.”

    My guess is that what is going on is that psychologists have encountered some oversimplified version of the above and have made some interpretations that they think fit what they are doing. But, of course, they have no idea what the above formalism of Heisenberg’s uncertainty principle is saying. (I don’t thoroughly understand it myself, although I understand at least some of the terminology involved and recall trying to explain some of it to a physicist many years ago. I would not be so foolish as to try to explain it to a psychologist!)

    • Knowing a lot of the folks working on these models, I can assure you that they have superb mathematical chops and I’ll personally vouch for their deep understanding of the material. But I think it is reasonable to question whether those models add to our understanding of psychology.

      Mathematical psychology has often relied on pretty sophisticated techniques, since minds and brains are pretty darn complicated. Rajesh, above, mentioned that models based on diffusion processes are very common in the field. Speaking from experience, it is easy to get to a point with the psychological models where one needs to invent new math that even the physicists haven’t touched yet.

      I think there are two bigger issues coming up in this discussion:

      1) The word “quantum” sets off a lot of buzzers and I don’t think it is even a good description of the core aspects of these models, even if they were inspired by the mathematics of some aspects of quantum theory.

      2) The models are meant to be interpreted in an algorithmic/computational way, rather than as a physical description of what is going on in someone’s head. E.g., the diffusion models that are very successful in explaining decision behavior do not assume that there’s an actual particle being buffeted around. Instead, they are based on the idea that there is a latent decision state whose dynamics can be described using the same equations that describe the physical process of diffusion. Same goes for the “quantum” models.

      • My conceptual problem with applying QM math to non-QM subjects is this: what in the heck is the square root of a probability (or even a complex-valued probability)? In QM, we all either put this off until some future time, since the equations work very well, quietly allow that there might be some kind of “magic” here, or try to come up with another formulation that ends up being equivalent.

        In the field of statistics and such? Hard to grasp. Of course, just because I can’t grasp the thing doesn’t mean that the QM math can’t apply, but still, it’s quite a step.

        • It’s not at all hard to grasp this “conceptually very straightforward” generalization of classical probability theory. In particular, there’s no need to wonder what the heck is the square root of a probability (and there are no complex-valued probabilities in it either).

          It’s deplorable that so long after von Neumann, Segal etc. such great ignorance of [these foundational aspects of] quantum theory is still widespread among physicists. What the heck’s the reason/excuse for it in an article and comments in this forum again I have no idea.

      • “they are based on the idea that there is a latent decision state whose dynamics can be described using the same equations that describe the physical process of diffusion. Same goes for the “quantum” models”

        Is there any empirical or other evidence to back up this idea? If not — the idea doesn’t seem scientifically credible.

        • A fair question, I’d say the short answer is “yes”, but of course that’s not a very good answer!

          I should first say what data these models explain: given a set of options, they jointly explain the probability with which someone chooses each of those options as well as the distribution of times at which someone makes each of those choices.

          With respect to yyw’s comment below, this kind of data represents extremely strong constraints on the models: It is a joint continuous distribution over N dimensions (where N is the number of choice options). And generally speaking, these distributions are estimated from thousands of trials for an individual participant, which means many nitpicky aspects of the distributions like tails, leading edges, multimodality, etc., can be characterized reasonably well. But it also means that if you’re doing this kind of research you have to rely on good friends and/or devoted spouses to keep people in the lab for that many sessions.

          From a theoretical perspective, although random walk/diffusion models had been popular in explaining choice for a while (e.g., Stone, 1960; Edwards, 1965; Link & Heath, 1975), they didn’t really break out until Ratcliff’s 1978 paper which also emphasized the importance of accounting for entire distributions instead of just the first few moments.

          The basic idea is that there is a latent decision state representing the balance of evidence in favor of each of the options. As the decision maker accumulates evidence (from perception, memory, etc.) that state evolves over time. Eventually, it will hit a boundary, and there is one boundary for each of the N options. The time at which the state intersects the decision bound is the predicted decision time, and which boundary it hit determines the response.

          This representation makes it easy to conceptualize things like the difficulty of the decision as well as biases. More difficult decisions are ones where the evidence doesn’t make a beeline for a boundary. Biases happen when the boundaries are not evenly spaced, such that less evidence is needed to reach some decisions than others.
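          The basic mechanics are easy to sketch in a few lines of code (a minimal simulation of my own; the drift, noise, and boundary values are invented, not fit to data):

```python
# Minimal two-boundary drift-diffusion simulation: a scalar evidence state
# drifts and diffuses until it hits one of two decision boundaries.
# All parameter values here are invented for illustration.
import random

def simulate_trial(drift=0.15, noise=1.0, upper=2.0, lower=-2.0, dt=0.01):
    """Return (choice, decision_time) for one simulated decision."""
    x, t = 0.0, 0.0
    while lower < x < upper:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return ("A" if x >= upper else "B"), t

random.seed(1)
trials = [simulate_trial() for _ in range(2000)]
p_a = sum(1 for choice, _ in trials if choice == "A") / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"P(choose A) = {p_a:.2f}, mean decision time = {mean_rt:.2f}")
```

          Raising the drift rate makes the decision easier (more “A” choices, faster responses); placing the boundaries or the starting point asymmetrically builds in a bias of the kind described above.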

          There’s been a huge amount of work put into explaining the sources of variability that produce these decision dynamics, like variation in attentional, perceptual, motor, and cognitive processes. My own perspective is that the diffusion process is a statistical approximation to these underlying sources of variability (this perspective has inspired much of my own work).

          But there’s also been a lot of recent work finding individual neurons whose dynamics resemble exactly what you’d expect if they were representing a decision state that evolved over time until reaching a boundary (Hanes & Schall, 1996; Roitman & Shadlen, 2002). There are still a lot of questions about how “literal” this resemblance is, but it’s a pretty nifty correspondence; Jeff Schall has a good recent review of this work (http://www.psy.vanderbilt.edu/faculty/schall/pdfs/Schall_TINS_2019.pdf).

          Anyway, I suppose it was silly of me to try to summarize 60 years of work in a blog comment, and I’m sure I’ve left out a lot. But hopefully this cursory summary is of some use!

        • Thanks for the explanation. I’ll still hold on to some skepticism, though — “All models are wrong, some are useful” — but interpreting that phrase really begs the question of, “Does the model really model the reality?” And since the reality is not known (after all, finding out what the reality is is really what we are trying to do)– well, things are at best still “early stage”.

        • +1 on the skepticism.

          I think these models have been very useful, but I also think their utility is limited. Although they provide excellent fits to the data they are meant to explain, as I said, I think the diffusion process should be viewed as an approximation to the cumulative effects of other perceptual/cognitive/motor processes.

          So diffusion models are a good “entree” into beginning to characterize those underlying processes, and I consider it the job of the field (i.e., my job) to dig into and develop models of those processes and how they interact. This will also put us in a better position to understand the relationship between the neural activity and the information they represent.

        • gec,

          I’m having trouble wrapping my head around your phrase “the data they are meant to explain”. I see a model as a model of a process. In practical usage, the process is what produced the data. So I would say that the model is intended to “describe (or perhaps explain) the process that generated the data”. (Note: What I mean by “generated the data” here is *not* the process by which the data were collected)

        • When I say “the data they are meant to explain”, I’m just trying to define the scope of what these models do, so that they are not asked to do more than they are capable of.

          I might also be being obtuse in using “explain” vs. “describe”, where I use “describe” to refer to the process, and “explain” to refer to the outcome of that process.

          So perhaps it would be best to say that diffusion models describe, at a certain level of abstraction, the process by which decisions between multiple alternatives are made. In providing such a description, they help explain phenomena that are present in the joint distribution of response times to each alternative. These include things like the shape of those distributions and how they covary with the difficulty of the choice as well as any biases a participant may have.

          I think a good analogy might be to Newton’s law of gravitation. Newton’s law describes how pairs of objects affect one another via gravity, when provided with their masses, distance, and a constant. As a result, it explains phenomena present in observations of the relative motion of objects. But Newton’s law doesn’t say where those masses come from, or why the force depends on the square of the distance.

          Same goes for the diffusion models, and why I wanted to be careful about defining their scope. They are good at a certain level of abstraction, but I think it’s important not to stop there. E.g., the diffusion models don’t say *why* biases occur or *why* certain options might not exert as much force on the decision state, just what happens conditional on those things.

      • Classical mechanics became insufficient when more advanced measurement tools became available. Given the crude measurement tools used in psychology, it is hard to imagine the need for diffusion models.

      • I once sat in on one of the talks by one of the authors, and he mentioned that they were exactly following quantum probability, the mathematics of quantum theory (not the physics). So there you go about 1) from their perspective. They didn’t invent the probability theory, and the theory is not only applied to physics, and is not owned by physicists LOL, just as classical probability theory is not just applied to classical physics.

        • They didn’t invent the probability theory and the theory is not only applied to physics, and is not owned by physicists LOL

          Well, it was invented by von Neumann and some other great mathematicians/mathematical physicists, but it’s no use, “a student in SBS”: I’ve tried explaining these facts several times (and at some length) before in this forum (see above).

    • To me, it intuitively makes sense that we might get different outcomes when two successive questions are asked in different orders. Thinking about the first question could very well influence how one answers the second — that seems so plausible, just given human nature. However, the idea of trying to get a “universal” model for a “second question” effect doesn’t make sense to me, since the effect may be different for different pairs of questions.

      • I was under the impression that this was a standard technique for getting the answers one wanted on political questions. Prime the subject by asking “Did you know that [group] is responsible for [some fraction] of murders and other violent crimes?” and then ask “Do you support integrating your neighborhood?” Inversely, a bunch of questions on how the mentally ill commit almost no major crimes but are the victims of a horrifically large number thereof, before trying to get support for mental health facilities.

        Again, I was under the impression that this was known to be effective.

        The interesting question is on what other issues you can play this game (get the subjects’ emotions stirred up and then ask a question related to said emotion). E.g., Have you ever been dumped? How bad did it hurt? Do you think cheating in marriage should affect alimony awards?

        As you can guess, the sort of theory I would like to see is one that speaks to the content of the material; I feel that a theory that uses obscure mathematics but is content-free would be inanely stupid. But that’s me.

        • My impression is that the technique is also used by police investigating a suspect, and attorneys in questioning witnesses. Indeed, these seem to be what is classified as “leading questions,” just as teachers can use (although not always successfully) Socratic dialogue to “lead” a student to a desired (at least by the teacher) understanding.

      • Of course it makes intuitive sense. This is another case of researchers coming up with an explanation for something that needs no explanation. We’d be surprised if it weren’t true.

        Classic example: “Rank your feelings about your health care provider” and “describe the worst experience you’ve had with your health care provider.” It would be shocking if changing the order didn’t change the ranking.

        Research that comes up with convoluted models for understanding what was already obvious is not good research.

      • By reading their SI, it seems they’ve compared some other existing models, and none can explain the QQ equality. The gist is that it’s not ONLY the question order effects, but how the model predicts that if the order effects occur, there is the QQ relationship. I’m a student in SBS (trying to figure out what to focus on for my graduate program), so I do not know a lot about these models, but I think I get that it has been hard to have such predictions that can be tested using an equality (the QQ equality, as they named it), as most modeling in SBS, by my reading, is just parameter fitting, comparing parameter (distributions), based on goodness of fit, etc.

  6. Oy. It sounds like a typical quantum bullshit explanation.

    I’ll believe that something might be “quantum” when they show it violates the Bell Inequalities, like genuine quantum phenomena do. It’s simple to do the experiment. You just use pairs of people whose answers on three different questions are highly correlated with each other, put them in remote rooms, scramble the question order,…. (yes, I know that no such human pairs are likely to exist, unlike the spinor offspring of S=0 particles.)
    When that’s done, if “spooky correlations at a distance” are found, then let’s talk quantum. Meanwhile, I’ll offer 1000/1 odds on the results. It would be 1,000,000/1 except at that level the motivation for sophisticated fakers would be too big.
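    For reference, the CHSH form of the Bell inequality that this bet is about can be checked numerically (a sketch of the textbook calculation, not anything from the paper under discussion): any local hidden-variable model satisfies |S| <= 2, while the quantum singlet-state correlation E(a, b) = -cos(a - b) reaches 2*sqrt(2) at the standard angle settings.

```python
# Textbook CHSH check: local hidden-variable models satisfy |S| <= 2,
# but the quantum singlet-state correlation E(a, b) = -cos(a - b)
# gives |S| = 2*sqrt(2) at the standard analyzer angles below.
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a and b (radians)."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2               # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4   # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f} (classical bound 2, quantum maximum {2 * math.sqrt(2):.3f})")
```

    No survey-style experiment on pairs of people is going to reproduce E(a, b) = -cos(a - b), which is the commenter’s point.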

  7. “Quantum theory is about little particles and, indirectly, about big things made from little particles. For example, quantum theory explains, in some sense, the existence of rigid bodies such as tables, chairs, and billiard balls.”

    Sorry to be argumentative, but…

    As someone whose undergrad minor was materials science, I submit that this “existence of rigid bodies” bit is not at all right. For example, in one of our courses, the prof. passed around a substantial bar of aluminum that was a zone-refined (extremely high purity) single crystal. Being a twat, I wondered how strong it was, and pulled on it with both hands. Despite my also being a wimp, it stretched. And was no longer a single crystal. That single-crystal, high-purity aluminum is only slightly stiffer than butter, or that you can make airplanes from precipitation-hardened aluminum alloys, isn’t really “predicted by quantum mechanics” in any serious sense. That you can make crystals is, of course, QM. But how those crystals behave at macro scale isn’t. Wood is a composite material that is stronger than steel for structural applications on a per-unit-weight basis. QM doesn’t tell you that.

    • DJL,
      Most scientists believe, and all physicists know, that those properties you’re talking about are indeed determined by quantum mechanics.

      But I’m guessing that when Andrew mentioned rigid bodies in this context he was referring to the fact that objects can’t pass through each other in spite of being mostly empty space in the classical picture, and that this can be explained (with some hand-waving) via the Pauli Exclusion Principle.

      • Phil,
        Regarding your initial statement: I suggest you read Philip Anderson’s short (4-page) anti-reductionist article “More is Different” (1972). You’d then reconsider your statement. ‘Determined by’ is not ‘explained by’. Admittedly, this topic is very far from the initial point of the post, or maybe it isn’t.

        P.S.
        Excerpting from Peter Woit’s recent obituary notice on Anderson, “It’s not hard to make the case that Anderson was the most important condensed matter theorist of the twentieth century. … Most importantly, what is often called the Higgs phenomenon really is a discovery of Anderson’s, and this should have been recognized by a second Nobel for him (he already had one for some of his work in condensed matter). … 1972 paper More is Different providing a strong critique of reductionism and emphasizing the importance of emergent behavior.”

        • Exactly. (Thanks for doing my homework. The points that QM supports chemistry but neither predicts nor explains what chemists can do, and that chemistry supports biology but neither predicts nor explains what evolution (or neurons) can do, have long been clear, but I’m an amateur in these things.)

          QM can neither predict nor explain (anything in materials science, including) that the wings would fall off of an airplane made of pure aluminum, nor that if you make an Al-Cu alloy with a tad extra Cu and heat treat it so that the Cu precipitates out within the grains, you get a superb engineering material. And it doesn’t explain that when some twat brazes such an Al alloy, the Cu goes back into solution and the Al part fails (a metallurgist friend actually had this problem recently in an airplane control rod: testing showed the part would withstand several orders of magnitude more cycles than it would be subject to, but in actual flight it was failing.)

          Whether QM even has anything to do with that extra Cu precipitating out* as you cool a (relatively) high-Cu-content alloy slowly is problematic. That’s thermodynamics, not QM.

          *: The precipitates stop dislocation motion within the individual grains (crystals) of the Al. That dislocation motion mechanism is the reason that the QM calculation of how strong an Al crystal should be overestimates said strength by several orders of magnitude. I’m fond of this example because it’s (a) important and (b) shows how irrelevant QM is.
