Alison Mattek on physics and psychology, philosophy, models, explanations, and formalization

Alison Mattek writes:

I saw your recent blog post on falsifiable claims. For the past couple of years I have been developing a theoretical framework that highlights the importance of unfalsifiable claims in science. I try to also make a few unfalsifiable claims regarding psychological variables.

Here is Mattek’s paper, “Expanding psychological theory using system analogies.” It reminds me a bit of the writings of Paul Meehl.

51 thoughts on “Alison Mattek on physics and psychology, philosophy, models, explanations, and formalization”

  1. 1. My awe of James Clerk Maxwell grows every time I see him discussed.

    2. So refreshing to read this about “brain scans”: “The fact that we see a pattern that replicates tells us nothing more than that there is a pattern that repeats itself. A mountain of work still lies ahead when we ask: how can we develop a scientific model of the pattern?” (Although I would have cut the words “that repeats itself” from the first sentence and “still” from the second sentence, as redundant.) So many neuroscientists, speaking publicly, call fMRI patterns “brain activity” as if that means anything. We have no idea.

  2. Very interesting paper!

    This is what I meant when I wrote a week ago here about how humans perceive time: I wrote that because ancient humans experienced time on many different scales (minutes, days, seasons, years, and possibly decades), they likely have an evolved ability to adjust for different time scales by, in effect, altering the coefficient of a linear equation describing how they perceive time.

    As in Mattek’s paper, the idea implies that a psychological process can be accounted for in the same way as, or with the same set of equations normally used to model, a physical process.

    Is it possible that the human brain has evolved to function by principles similar to those of the physical world because it’s more efficient to have only a single, widely generalizable fundamental model than it is to have many unique models? Chomsky and Pinker postulated an “innate grammar” which, as I recall, they believed was an intrinsic intellectual structure of human understanding. However, Terrence Deacon later developed the idea that grammar doesn’t have to be innate, because logically there is only one way to relate cause and effect, and without that, there is no language.

    On another topic, check out this:
    https://www.npr.org/sections/health-shots/2019/07/21/743408637/how-microexpressions-can-make-moods-contagious

    vs. this:
    https://journals.sagepub.com/doi/full/10.1177/1529100619832930

    Am I missing something? Are these two ideas at odds or compatible?

    • Time is not perceived, it is only remembered. I’m not sure how this relates to language or emotional expression, except that the brain seems to project all these phenomena into a space-time framework.

  3. Interesting. I can see how the emphasis on measurement is reminiscent of Meehl’s work. And the general tone certainly brings to mind Wigner’s paper “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” (see https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences).

    However, I can’t help but wonder if the type of thing Mattek discusses is analogous to discussing motion of a falling object by deriving an equation from F = ma, but neglecting to mention that in most real life situations, this is not a good way to predict when or where the object will land, because it neglects air resistance, air currents, etc.

      • To quote from your link:

        “Textbook statistical theory is like the physics in an introductory mechanics text that assumes zero friction etc. Friction can be modeled but that turns out to be a bit “phenomenological,” that is approximate.”

        • Actually, I have often done the following on the first day of a statistics or probability class for people with enough math background:

          I remind them how to derive the equation of a falling object from the equation F = ma, then use that to calculate how long it should take for an object to drop from a height of 6 ft. Then I say, let’s check out the formula with an experiment actually dropping an object. I ask for a couple of people with suitable watches to time how long it takes the object to drop, and for another volunteer to drop the object. When everyone is ready, I pull the object out of the bag that I’ve had under the desk: An inflated balloon. It gets a laugh. And if the AC is on in the room, wow how the measured time can vary! But I hope it makes the point and helps them remember the point.
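          For reference, here is a minimal sketch of the no-air-resistance prediction described above, assuming standard gravity and the 6 ft drop height from the anecdote:

```python
import math

g = 9.81            # gravitational acceleration, m/s^2
h = 6.0 * 0.3048    # 6 ft drop height converted to meters

# From F = ma with constant acceleration g and zero initial velocity:
# h = (1/2) g t^2, so t = sqrt(2 h / g)
t = math.sqrt(2.0 * h / g)
print(f"Predicted drop time (ignoring air resistance): {t:.2f} s")  # about 0.61 s
```

          An inflated balloon, of course, takes far longer than this prediction, which is the point of the demonstration.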

        • We could have had a blast co-teaching a course in physics + statistics. For a class in computer programming in Matlab for engineers, I developed an introductory project where we derived the equations of motion for a spherical projectile in 2D through a realistic fluid, using a phenomenological fit to the experimental drag-coefficient data. The students would then implement it using Matlab’s built-in DiffEQ solvers and answer questions like: at what angle should they shoot a tennis ball to maximize the distance traveled, and what was the maximum distance traveled as a function of the initial velocity, and so on. It would have been fun to actually have them shoot pingpong balls out of some kind of Nerf gun type contraption, measure the results, and compare them statistically to the predicted values…
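          In the same spirit, here is a rough Python sketch of that kind of projectile-with-drag calculation (not the original Matlab project: it assumes a constant drag coefficient rather than the phenomenological fit mentioned above, and the tennis-ball parameters and launch speed are just illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative tennis-ball parameters (not from the original project)
m, r = 0.057, 0.033        # mass (kg), radius (m)
rho, Cd = 1.2, 0.5         # air density (kg/m^3), constant drag coefficient
A = np.pi * r**2           # cross-sectional area (m^2)
g = 9.81

def rhs(t, s):
    """State s = [x, y, vx, vy]; quadratic drag opposing the velocity."""
    x, y, vx, vy = s
    v = np.hypot(vx, vy)
    k = 0.5 * rho * Cd * A * v / m     # drag acceleration per unit velocity
    return [vx, vy, -k * vx, -k * vy - g]

def hit_ground(t, s):
    return s[1]
hit_ground.terminal = True
hit_ground.direction = -1

def horizontal_range(angle_deg, v0=30.0):
    a = np.radians(angle_deg)
    y0 = [0.0, 0.0, v0 * np.cos(a), v0 * np.sin(a)]
    sol = solve_ivp(rhs, (0.0, 30.0), y0, events=hit_ground, max_step=0.01)
    return sol.y[0, -1]

angles = np.arange(20.0, 61.0, 1.0)
best = max(angles, key=horizontal_range)
print(f"Best launch angle ~ {best:.0f} deg, range ~ {horizontal_range(best):.1f} m")
```

          With drag included, the range-maximizing angle typically comes out below the frictionless 45 degrees, which is exactly the kind of question the project had students answer.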

          This was the kind of stuff I always loved to do as a teacher. It’s why I got terrible TA reviews and was booted out of teaching, because the students all wanted me to solve textbook problems at maximum speed while they passively watched and tell them what problems were going to be on the midterm exams… sigh

        • That’s awesome, I love how well those observed trajectories match the kind of stuff you get from the ODEs, and I also think it’s great how they look qualitatively like Aristotle’s theory of projectile motion predicts…

          There are also some great biological mechanics problems discussed in Steven Vogel’s books.

    • Certainly Newton’s laws are not useless simply because they are only approximations of real data. If you want to build a bridge, you have to know them.

  4. When I write to editors and tell them I “enjoyed” reading that paper and reviewing it for them, I’m usually lying. In this case, I actually did kinda enjoy reading this. And not just for comment (Stuff I liked (2)). Thanks for sending us something meaty and thoughtful and imperfect…which is basically the best academia has to offer. Below find some comments on the paper on stuff I liked, stuff I didn’t like, and stuff I’m not sure if I like or not. I offer them as a thanks more than as suggestions; it is just that I’m terrible at thank yous.

    Stuff I liked:

    1. That sentence Kyle C. noted above.

    2. Figure 1 Panel C, for the part where they shock the mouse with the lightning bolt. I’m gonna use that in a DAG some day…if I ever use a DAG.

    3. “Consider that simply knowing about the general two-body problem and its solution represents a type of scientific knowledge.” This is great. It reminds me of a paper I heard discussed recently that traced some of Einstein’s thinking about relativity to Hume’s thinking about the “observer/object” distinction. I don’t know what to call these sort of foundational building-blocks/thought-experiments that are essentially meditations that open up new possibilities for understanding relationships between things, but they should probably have a name.

    4. If our current structural-modelling-based applied economists cared as much as this work about how models, definitions, and relationships in the math do and don’t map onto ideas we actually have about the world, I’d be more likely to read more of it. If we are gonna go all idealized math on the world, I think there is a responsibility to translate back and forth between the math and the world, not just in an “I can name this thing” sense but in the “the thing I name X and the way you understand X in ordinary language relate in this way and diverge in this way, and this affects interpretation in this manner” sense. And this paper takes that translation problem seriously (maybe as its actual core contribution, see Stuff I don’t like (3)). I appreciate that, and enjoyed the background in the history of science used to set up and explore that problem.

    Stuff I may or may not like, I’m not sure yet:

    1. Should we take an idea we have some sense of (say, “learning”) and then go searching for a measure of it in the real world…or should we define something measurable and then go looking for a word that approximates the thing we are measuring? Somehow these both feel wrong? Like “alpha” and “attention” should just be called the “mathematical” and “linguistic” definitions of attention? The thing the “linguistic” version has going for it is that I know what it means in the world; the thing the “mathematical” definition has going for it is that I know what it means in an accounting/decomposition exercise.

    2. “All behavior is part of a natural circuit.” I can’t decide if this is insightful, tautological, or epistemologically imperial. Or all 3.

    3. “This is important because equations can be empirically verified, whereas verbal definitions can only be intuitively agreed upon.” I mean, the William James pragmatist in me says great, resolving ambiguity helps science move forward (see: squirrel and tree; alternatively, see squirrel and moose). On the other hand, I read a paper the other day that mathematically defined something that looked a lot like savings across time and then interpreted it in terms of a bio-physical metabolism response…which, I don’t think that was particularly helpful at all, and the ability to say “look there it is in the math” also doesn’t help, because it doesn’t actually relate to any thing in the world we might care about. And in general I’d think I have a better idea of what “surprise” might mean from my socio-linguistic experience than from math, and I wouldn’t think a mathematical definition of something called “surprise” would give me real insight into what we mean by that term (and hence the measurement of this other quantity is not necessarily super interesting to me). Maybe the mathematical thing is more “real” but even if it is it wouldn’t be what we mean when we say “surprise”.

    Stuff I don’t particularly like:

    1. I’m still not sure that social science should pretend towards physics.

    2. I don’t like that I suspected the phrase “free energy” was going to show up, and then it did. Because now I’m like “I don’t think minimizing surprise means anything outside of math.” Unless we want to call “surprise” something we can measure by variance or whatever. See “Stuff I May or May Not Like (1)”. Also, I don’t pretend to understand that guy. Not that I’ve really tried real hard, so…

    3. This paper does a thing my papers sometimes do in drafts (hence why it is here in stuff I don’t like, because it’s my own biggest writing failure) where the paper/speaker actually cares more about the philosophical problem/insight than it does about the [discipline here] insight that is the titular purpose of the paper. It isn’t just that the referees and editors hate it, it is that it tends to fail on both points by not taking either quite seriously enough. But I still want to write those papers, so…

    4. I’m not sure attention is really like heat. Or maybe I’m not sure that they are both real and not-real in the same ways.

    5. “That is, a single cilia of an inner ear hair cell is an analog sampling filter, operating as an ideal bandpass filter that resonates at a single frequency.” – I don’t think the word “ideal” belongs here, and it worries me that it would appear. It is bad enough when economics divorces “mathematically ideal given a model” from “the ideal we want as society” (again see “Stuff I may or may not like (1)”), but somehow it bothers me even more when it stems from some evolutionary perspective where it is not at all clear what “ideal” might even mean. Nature has no objective function…unless you really wanna double-down on that free-energy principle thing.

    6. “Second, there is great convenience to be found in translating dynamic systems of any domain into the domain of electrical circuits” I’m not sure we have the same conception of convenient. Maybe you could put it into a differential-equation for me? (#jokes).

    • In physics an “ideal bandpass filter” is one which has response 1 in the pass-band and response 0 outside the pass-band, so basically something that can only be excited by the frequencies it was designed to be excited by. That’s probably a different concept of “ideal” than the one you had in mind.
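      To make that concrete, here is a minimal sketch of that sense of “ideal” (the function name and cutoff frequencies are just illustrative):

```python
import numpy as np

def ideal_bandpass(freqs, f_low, f_high):
    """Response 1 inside the pass-band, 0 outside: the physics sense of 'ideal'."""
    freqs = np.asarray(freqs, dtype=float)
    return ((freqs >= f_low) & (freqs <= f_high)).astype(float)

test_freqs = [100, 500, 1000, 2000, 5000]      # Hz, arbitrary test frequencies
print(ideal_bandpass(test_freqs, 800, 1200))   # -> [0. 0. 1. 0. 0.]
```

      A real filter’s response rolls off gradually instead of jumping between 0 and 1, which is why the physics usage of “ideal” means “idealized” rather than “desirable”.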

      • So what you are saying is that in this context the mathematical meaning of “ideal” and the linguistic meaning of “ideal” are totally different? Huh. Which one matters more?

        Said another way: it was bad enough when we had replication bullies here, but now we have the mathematical-grammar police too. Never satisfied.

        Said another another way: thanks for the clarification, I was totally wrong.

        • I don’t know, I guess “ideal bandpass filter” makes sense to me, it idealizes the idea of a bandpass filter….

          Merriam-Webster gives the definitions

          ideal 1b) “conforming exactly to an ideal, law, or standard : PERFECT // an ideal gas”

          and 2a) “existing as a mental image or in fancy or imagination only”

          so I think it’s the concept of “idealized” or “perfect”: the “platonic ideal” of a bandpass filter is one that responds only to the frequencies in its band.

          rather than say the ideal as in “the thing we want the most” as in “my ideal sandwich has both mustard and mayo”

      • Maybe Paul’s mathiness is like the evil twin of Alison’s project – one uses math to obscure the world, the other uses math to illuminate it and provide structure for interpretation. This may have to do with the role of the math in the argument: Paul worries (I think) that economists use math for its own sake and then pretend whatever they come up with must explain the world because #math; Alison (I think) wants to take the world itself seriously and try to formalize a math for describing and measuring it. So maybe mathiness is the backward version of formalization, and I see formalization as potentially useful but with real limitations, and I see mathiness as a way to show how smart and dumb you are at the same time.

        Or maybe I’m just projecting. I haven’t read the Mathiness paper in a while, and I didn’t ever have a chance to go through it carefully and critically.

    • “should we take an idea we have some sense of (say, “learning”) and then go searching for a measure of it in the real world…or should we define something measurable and then go looking for a word that approximates the thing we are measuring?”

      We should observe, measure, compare, theorize, model, test and iterate.

    • I’m so glad you enjoyed the paper. Thank you for your thorough comments, it will take me some time to digest them all. In brief, I would say that all science must tend toward physics, even social science. This is because all observations happen in the physical domain. That is what an observation is–a physical reality. If you are talking about something that is not a physical reality, then you are not doing science. Period.

  5. From the paper: “The proportionality of salience and surprise follows the structure of Newton’s second law, and would therefore be a special case of the law of conservation of energy.”

    Newton’s second law is about conservation of momentum, not energy.

    It feels like this paper is about someone waking up to the idea of mathematical modeling in a serious way… it’s kind of cute, like watching a puppy first encounter a lake or something, but I don’t think it’s groundbreaking in mathematical modeling circles, yet it’s nice to see someone realize you can take modeling seriously and maybe encourage people to do so in the area of psychology and/or social science in general.

    • Daniel said,
      “It feels like this paper is about someone waking up to the idea of mathematical modeling in a serious way… it’s kind of cute, like watching a puppy first encounter a lake or something, but I don’t think it’s groundbreaking in mathematical modeling circles, yet it’s nice to see someone realize you can take modeling seriously and maybe encourage people to do so in the area of psychology and/or social science in general.”

      That more or less describes the impression I had, but I was not able to put it into words. But it reminds me of a story a friend once told me: Someone with a little boy about 4 yrs old was visiting the U.S. The child didn’t know any English when he first came here, but the friend offered him English words for some things. All of a sudden, the kid clearly caught on that this was another language, and started pointing to objects and giving a questioning look until he was given the English word for the object he was pointing at. Kinda like forming a conjecture and looking for confirmation that it worked in other cases.

      • Andrew Wilson’s suggestion about Kruschke’s book is pretty good, specifically as an introduction to Bayesian models. But I think Bayesian models are an inference layer on top of other kinds of models, like difference equations, differential equations, algebraic equations, PDEs, cellular automata, etc etc

        For an introduction to the idea of mathematical modeling of mechanism, I’d recommend three books from Cambridge Univ. Press

        A.C. Fowler. Mathematical Models In The Applied Sciences
        Sam Howison. Practical Applied Mathematics
        G.I. Barenblatt. Scaling

        These are introductions to ideas of how to describe physical phenomena with equations, and how to examine and analyze those equations.

        It also helps to have some background in physics. I really like The Feynman Lectures on Physics Vol. 1 as a relatively deeper intro to physics than what you’d learn from, say, a typical undergrad textbook like Halliday, Resnick, and Walker.

        • Daniel writes:

          Bayesian models are an inference layer on top of other kinds of models, like difference equations, differential equations, algebraic equations, PDEs, cellular automata, etc etc.

          I agree. That’s one thing I find frustrating in those discussions with Judea Pearl. He seems to want to frame multilevel modeling and Bayesian inference as in opposition to his graphical model framework for causal inference. But, to me, I think of multilevel and Bayes as an inference layer on top, something that could be applied in various causal and non-causal settings.

        • By jointly defining a Bayesian reference set and a (hierarchical) data-generation reference set for the pertinent set of difference equations, differential equations, algebraic equations, PDEs, cellular automata, etc. etc.?

        • Where do likelihoods come from? This was the question that changed my view of what Bayes is. When you have an ODE or a cellular automaton or a spectral representation of a PDE or what have you, you have a predictor. But data are messy and models imperfect; whatever precise values you predict, the data will always differ by some quantity. A Bayesian likelihood describes plausibility weights over differences between predictions and actual measurements. That’s all it does. You can put as much or as little structure as you like into this plausibility weight function…
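          As a minimal sketch of that idea (the exponential-decay predictor, the normal error model, and all parameter values here are illustrative assumptions, not anything from the paper):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical deterministic predictor: solution of dV/dt = -k*V with V(0) = v0
def predict(t, k, v0):
    return v0 * np.exp(-k * t)

# The "plausibility weight" over prediction/data mismatch: here a normal density
# on the residuals, but any weighting of the differences would serve the same role.
def log_likelihood(params, t, y_obs):
    k, v0, sigma = params
    resid = y_obs - predict(t, k, v0)
    return np.sum(norm.logpdf(resid, loc=0.0, scale=sigma))

t = np.linspace(0.0, 10.0, 20)
rng = np.random.default_rng(0)
y_obs = predict(t, 0.3, 1.0) + 0.05 * rng.normal(size=t.size)
print(log_likelihood([0.3, 1.0, 0.05], t, y_obs))
```

          Swapping the normal density for something heavier-tailed or more structured changes how much plausibility is assigned to large prediction-data mismatches, which is the “as much or as little structure as you like” part.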

        • The phrase data generation model (part of where likelihoods come from) should not be taken to mean how the actual data were generated but rather how fake data that emulates the actual data generation process well can be repeatedly made. If you want to call what goes into that – plausibility weights rather than probabilities – fine.

        • I think the “fake data that emulates the actual data generation process well can be repeatedly made” idea is a consequence of the isomorphism between the mathematics of weights and of frequencies in random sequences. I don’t think repeatedly making fake data is essential to the concept, though. It is an extremely useful computational technique, and it can be very helpful for thinking about how to specify the model.

          The main thing is that to get a fit to data, it’s sufficient to be able to say that data in one region agrees less well with the model than data in another region. I see the agreement between what the model “thinks” should happen and what actually happens as the key ingredient of the likelihood. Most ODEs and so forth produce deterministic predictions: given a set of initial conditions and a set of coefficients, you get a single output object. There is no notion of repeatedly running the model and getting different results. Because that single prediction will never be precisely right in every way, we need a way to quantify whether the differences are more or less extreme compared to our expectations for their size or type.

    • Momentum is a form of energy, so conservation of momentum is an *instance* of conservation of energy. This paper is not so much about mathematical modeling, but about creating mathematical definitions of psychological variables, which is an additional leap away from the data. Definitions are a matter of convenience, so like geometric axioms, they are ultimately unfalsifiable. I’m so honored you felt the need to clarify that the paper is not “groundbreaking”.

  6. Where does equation 1 come from?
    What assumptions is it derived from?
    Why isn’t it fit to any data to show us how it behaves?
    What is the relationship to the law of effect?
    How can you have a paper about this without mentioning Thorndike, Thurstone, or Gulliksen?

    I would start here and follow the references: https://link.springer.com/article/10.1007%2FBF02289265

    I don’t think you need any analogy to physics, just look at what psychologists were doing before NHST ruined the field.

    • Anon:

      I think there’s an interesting intellectual history to be done here, tracing from “psychophysics” and “psychometrics” from circa 100 years ago which uses physics-inspired mechanistic or quasi-mechanistic models, to “behaviorism” from circa 100 years ago whose models seem to me to be more Boolean-inspired (do X and observe Y), to “judgment and decision making” from circa 50 years ago whose models were derived from psychophysics, to “behavioral economics” and “evolutionary psychology” which are typically studied in the way that ESP has been studied for so many years, using null hypothesis significance testing with any underlying model being a black box of no inherent interest.

      • I’m not sure I see much of a progression. I think it was more “punctuated”. E.g., all of a sudden you have NHST available, which allowed the “cognitive revolution” to take hold. NHST lets you easily draw conclusions about any invisible, unmeasurable thing you want, of course.

        • Anon:

          I disagree. The whole point of the “cognitive revolution” is that psychology is not a “black box.” Looking at the NHST, black-box, push-a-button subfields of psychology, I would not call these “cognitive” at all. For example, the study that claimed that single women were 20 percentage points more likely to support Barack Obama during certain times of the month . . . that was not cognitive psychology; if anything it was anti-cognitive psychology. Similarly, much of evolutionary psychology seems anti-cognitive or pre-cognitive to me, in that it’s all about our actions and beliefs being driven by hidden forces beyond our control. Again, consistent with black-box models and NHST statistics.

        • Well, 1959 was soon after the origins of the cognitive revolution and here is what “old school” psychologists were complaining about already:

          The usual application of statistics in psychology consists of testing a “null hypothesis” that the investigator hopes is false. For example, he tests the hypothesis that the experimental group is the same as the control group even though he has done his best to make them perform differently. Then a “significant” difference is obtained which shows that the data do not agree with the hypothesis tested. The experimenter is then pleased because he has shown that a hypothesis he didn’t believe, isn’t true. Having found a “significant difference,” the more important next step should not be neglected. Namely, formulate a hypothesis that the scientist does believe and show that the data do not differ significantly from it. This is an indication that the newer hypothesis may be regarded as true. A definite scientific advance has been achieved.

          Mathematical Solutions for Psychological Problems. Harold Gulliksen. American Scientist, Vol. 47, No. 2 (June 1959), pp. 178-201.

          Here is one of the supposed originating papers and I can find no fault with it though: http://psychclassics.yorku.ca/Miller/

    • I’m pretty sure there’s a typo in equation 1. To be dimensionally consistent, the right-hand side overall needs to have units of $V/t$.

      Inside the bracket there is $\alpha_C \beta_U - V$, so $\alpha_C \beta_U$ must have the same units as $V$. Therefore the right-hand side has units of $V \cdot V$, not $V/t$.

      Equation 5 from the reference you linked is better, as it is dimensionally consistent and the rate of change of $V$ saturates for large $\alpha_C$.

      • I think equation 25 is more comparable, but to be honest it isn’t clear to me what “Associative Value (V)” really refers to:

        Here, associative value is defined as the psychological connection of a CS with a US, which can be measured via overt behavioral and physiological response patterns that correspond to experimentally controlled CS-US contingencies in time.

        From that I would think you would measure it by checking the frequency with which the CS follows the US, which should map to the “probability of the correct [learned] response”. The variable e in that model would be more like how many times the correct [supposed to be learned] response did not occur.

      • Thanks for this comment. You are probably right about the typo, my intention was to make Eq 1 a modified version of the Rescorla-Wagner model, adding alpha to the asymptote parameter and writing it as a traditional differential equation in the continuous time domain. Thanks!
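      For reference, here is a minimal sketch of the textbook discrete-trial Rescorla-Wagner update and a simple continuous-time analogue; this is not necessarily Mattek’s exact Eq 1, and the parameter values are arbitrary:

```python
import numpy as np

# Textbook discrete-trial Rescorla-Wagner update: V <- V + alpha*beta*(lam - V)
def rescorla_wagner(alpha, beta, lam, v0=0.0, n_trials=50):
    V = np.empty(n_trials + 1)
    V[0] = v0
    for i in range(n_trials):
        V[i + 1] = V[i] + alpha * beta * (lam - V[i])
    return V

# A simple continuous-time analogue, dV/dt = alpha*beta*(lam - V), via Euler steps
def rescorla_wagner_ct(alpha, beta, lam, v0=0.0, t_max=50.0, dt=0.1):
    ts = np.arange(0.0, t_max, dt)
    V = np.empty_like(ts)
    V[0] = v0
    for i in range(1, ts.size):
        V[i] = V[i - 1] + dt * alpha * beta * (lam - V[i - 1])
    return ts, V

print(rescorla_wagner(0.3, 0.5, 1.0)[:5])  # associative value climbing toward lambda
```

      In the textbook version the associative value climbs toward the asymptote lambda at a rate set by alpha and beta; the reply above describes modifying that asymptote term and writing the model as a differential equation in continuous time.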

  7. Really nice paper, I surely enjoy efforts in modeling psychological phenomena. (I especially like Meehl’s “The power of quantitative thinking” as a justification for why this is important for a field such as psychology.) Another nice example, treating panic disorder as a dynamic system, with simulated data and all, is given in “Advancing the Network Theory of Mental Disorders: A Computational Model of Panic Disorder”.

    I have some objections about the whole idea of unfalsifiable claims and the assumed link between Popper’s demarcation criterion and NHST:
    (1) Meehl has repeatedly objected to the notion that NHST is a clear operationalization of the falsifiability criterion — see his “two knights” paper (“Theoretical risks and tabular asterisks…”), and, especially in comparison to physics, his paper about the paradox of theory testing in psychology and physics (“Theory-testing in Psychology and Physics…”).

    (2) Maybe the heuristic power of analogies cannot itself be falsified (it’s a framework to build models and derive observable hypotheses); but the whole set of equations taken from the established analogy between classical conditioning and physics models is amenable to being “empirically verifiable”, as Mattek himself writes in the paper. In fact, Popper’s criterion of falsifiability does not mean that universal statements are falsifiable by themselves alone: they are falsifiable as long as we can derive, from a set of universal statements and some initial conditions, a set of statements that the posited hypothesis forbids. This is exactly what Mattek proposes in the paper, admitting that such a model can be used to simulate data (see how it’s done nicely in the panic disorder paper above), or, putting it another way, the proposed set of equations is able to make observable predictions as long as we input some initial conditions (like initial parameter values in the equations).

    P.S.: An earlier version of this comment did not get published, probably because of the copious amount of links.

    • Meehl has repeatedly objected to the notion that NHST is a clear operationalization of the falsifiability criterion — see his “two knights” paper (“Theoretical risks and tabular asterisks…”), and, especially in comparison to physics, his paper about the paradox of theory testing in psychology and physics (“Theory-testing in Psychology and Physics…”).

      Not that I disagree, but can you quote the specific sections you are referring to? I don’t remember ever reading Meehl and considering whether “NHST is a clear operationalization of the falsifiability criterion”. Meehl was quite obviously in agreement with Lakatos from what I have read…

      • Oops, my sentence is so convoluted – I mean the exact opposite (agreeing with you): Meehl argues time and again that NHST IS NOT an operationalization of the falsifiability criterion. Well, at least not the NHST commonly used in “soft psychology”, as he usually says.

        • I meant that I don’t recall Meehl ever discussing whether or not NHST was an operationalization of the falsifiability criterion. I just don’t know what that means so was hoping for a quote.

    • I think you meant to say, Mattek *herself*…thanks!

      Thank you for your comments. The whole discussion on falsifiability and NHST was just a way of introducing the analogy. If it is distracting, I might just reframe the idea as identifying a potential way of mapping psychological variables to circuit variables, for the purposes of mapping out brain circuits and instantiating them in AI. I touch on this already in this draft of the paper, but the message doesn’t seem to be getting across very clearly. Thanks again.

  8. From the beginning of his “Two knights” paper:

    ‘The two knights are Sir Karl Raimund Popper (1959, 1962, 1972; Schilpp, 1974) and Sir Ronald Aylmer Fisher (1956, 1966, 1967), whose respective emphases on subjecting scientific theories to grave danger of refutation (that’s Sir Karl) and major reliance on tests of statistical significance (that’s Sir Ronald) are, at least in current practice, not well integrated—perhaps even incompatible.’

    The whole paper is about the poor way significance tests are used in Psychology because they are not risky enough. I will try to locate a better quote soon.

    • I was thinking of this paper, where Meehl (correctly) rejects falsifiability as an unattainable goal:

      Thus, although we intended to appraise only the main substantive theory T, what we have done is to falsify the conjunction; so all we can say “for sure” is that either the theory is false, or the auxiliary theory is false, or the instrumental auxiliary is false, or the ceteris paribus clause is false, or the particulars alleged by the experimenter are false.

      (1997) The problem is epistemology, not statistics: Replace significance tests by confidence intervals and quantify accuracy of risky numerical predictions. In L. L. Harlow, S. A. Mulaik, & J.H. Steiger (Eds.), What if there were no significance tests? (pp. 393-425). Mahwah, NJ: Erlbaum. meehl.umn.edu/sites/g/files/pua1696/f/169problemisepistemology.pdf

    • what has all this got to do with significance testing? Isn’t the social scientist’s use of the null hypothesis simply the application of Popperian (or Bayesian) thinking in contexts in which probability plays such a big role? No, it is not. One reason it is not is that the usual use of null hypothesis testing in soft psychology as a means of “corroborating” substantive theories does not subject the theory to grave risk of refutation modus tollens, but only to a rather feeble danger
