When are complicated models helpful in psychology research and when are they overkill?

Nick Brown is bothered by this article, “An unscented Kalman filter approach to the estimation of nonlinear dynamical systems models,” by Sy-Miin Chow, Emilio Ferrer, and John Nesselroade. The introduction of the article cites a bunch of articles in serious psych/statistics journals. The question is, are such advanced statistical techniques really needed, or even legitimate, with the kind of very rough data that is usually available in psych applications? Or is it just fishing in the hope of discovering patterns that are not really there?

I wrote:

It seems like a pretty innocuous literature review. I agree that many of the applications are silly (for example, they cite the work of the notorious John Gottman in fitting a predator-prey model to spousal relations (!)), but overall they just seem to be presenting very standard ideas for the mathematical-psychology audience. It’s not clear whether advanced techniques are always appropriate here, but they come in through a natural progression: you start with simple models (linear regression, errors-in-variables regression), these simple models don’t quite fit, you make the models more complicated, the complicated models aren’t quite right either, etc. Ultimately you can get pretty big complicated models because the alternative is even worse. Whether this all makes sense is another question. Two areas in psychology where it does seem to make sense to use complicated models are: (1) personality types (we really are complicated multidimensional people) and (2) educational testing (where many different skills and abilities are tested at once).

Nick replied:

I wouldn’t have found the article objectionable, had it not been for the glowing write-up it received in a recent book chapter (Algoe, Fredrickson & Chow, in “Designing positive psychology: Taking stock and moving forward”), and this quote in particular:

First, be willing to leave the pack, think outside the box, all the while attending to the subtle yet recurrent patterns whispered by your data. Keep in mind that advances often represent bold and risky departures from current understanding. . . . Second, be open to capitalize on the rapid advances in measurement tools and mathematical and statistical models. Armed with these new . . . advances, while maintaining empirical and methodological rigor, emotion scientists working in positive psychology will be better equipped than ever before to find practical answers to age-old questions about what makes life good.

It seems to me that the authors are encouraging their readers to go on fishing expeditions, much like Bem (2000) (literally) did in a paragraph that Wagenmakers et al. cited in their 2011 review of Bem’s appalling “psi is real” paper:

Examine [the data] from every angle. Analyze the sexes separately. Make up new composite indexes. If a datum suggests a new hypothesis, try to find further evidence for it elsewhere in the data. If you see dim traces of interesting patterns, try to reorganize the data to bring them into bolder relief. If there are participants you don’t like, or trials, observers, or interviewers who gave you anomalous results, place them aside temporarily and see if any coherent patterns emerge. Go on a fishing expedition for something–anything–interesting.

Chow et al.’s use of some pre-existing empirical data to “validate” their model – the data coming from a previous study by Nesselroade – suggests to me that there is some cherry-picking going on. Without confirmatory analysis on a fresh data set, this is just so much circular reasoning.

To which I replied:

There’s a lot of misunderstanding here, but I don’t think the paper you sent is particularly bad; it’s just part of a general attitude people have that there is a high-tech solution to any problem. This attitude is not limited to psychologists. For example, Banerjee and Duflo are extremely well-respected economists, but they have a very naive view (unfortunately, a view that I believe is common among economists, especially high-status economists, for whom it’s important to be connected with what they view as the most technically advanced statistics) of what is important in statistics. See the P.S. here.

As I wrote a couple years ago, the problem, I think, is that they (like many economists) think of statistical methods not as a tool for learning but as a tool for rigor. So they gravitate toward math-heavy methods based on testing, asymptotics, and abstract theories, rather than toward complex modeling. The result is a disconnect between statistical methods and applied goals.

For the psychologists you’re looking at, the problem is somewhat different: they do want to use statistics to learn, they’re just willing to learn things that aren’t true.

10 thoughts on “When are complicated models helpful in psychology research and when are they overkill?”

  1. It seems useful to remind ourselves to work hard (when we can) on designs that will allow us to use the simplest statistical tools in the box. Many of the most compelling papers (at least in judgment and decision making) are the product of ingenuity on the front end of data collection rather than the back end. It seems to me that capitalizing on new mathematical techniques should come with a hint of dismay rather than pure satisfaction.

  2. I think there’s an additional dynamic that you haven’t touched on, which is the need for Ph.D. candidates and junior faculty to find things to publish. “Application of Technique X to Problem Y” is a rich source for contributions that are both novel and pointless.

  3. There’s certainly a machismo aspect to modeling which is not helpful.

    However, from a practical standpoint, it’s often the “roughness” of the data that necessitates more complex models. For example, accounting for unevenly sampled heterogeneous subpopulations in the data may require quite a few parameters, which then necessitates regularized approaches over vanilla MLE approaches. Data may come from some underlying dynamic process rather than being samples from a fixed distribution. Modeling the dynamics of the process can also be used to regularize a noisy trajectory. I haven’t read the paper, but Kalman filters are often useful for this kind of thing (see the sketch at the end of this comment).

    Accounting for these sources of “roughness” in the underlying process through a more complex model is what helps make the data interpretable in a simple way. By contrast, off-the-shelf independence tests computed from sampling distributions that have nothing to do with the underlying process shift the burden from well-defined complexity in the model to ill-defined complexity in the interpretation. The problem is that the contributions of the system dynamics or sampling process and the causal effect of interest are not separated in the correlation estimate.
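    For concreteness, here is a minimal sketch (my own, not from the Chow et al. paper) of using a one-dimensional Kalman filter to regularize a noisy trajectory, assuming a simple random-walk state-space model with illustrative (not estimated) noise variances:

        import numpy as np

        # 1-D Kalman filter for a local-level model:
        #   state:       x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
        #   observation: y_t = x_t + v_t,      v_t ~ N(0, r)
        # q and r are assumed noise variances for illustration only.
        def kalman_filter_1d(y, q=0.01, r=1.0):
            n = len(y)
            x_hat = np.zeros(n)   # filtered state means
            p_hat = np.zeros(n)   # filtered state variances
            x, p = y[0], r        # initialize at the first observation
            for t in range(n):
                x_pred, p_pred = x, p + q           # predict: carry state forward
                k = p_pred / (p_pred + r)           # Kalman gain
                x = x_pred + k * (y[t] - x_pred)    # update with the observation
                p = (1 - k) * p_pred
                x_hat[t], p_hat[t] = x, p
            return x_hat, p_hat

        # Example: a slowly drifting latent signal observed with heavy noise
        rng = np.random.default_rng(0)
        truth = np.cumsum(rng.normal(0.0, 0.1, size=200))
        observed = truth + rng.normal(0.0, 1.0, size=200)
        filtered, _ = kalman_filter_1d(observed)

    The point is not this particular filter; it is that the smoothing comes from an explicit model of the underlying dynamics rather than from an off-the-shelf test that ignores them.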

  4. I’m not quite sure what to make of this discussion. It seems a bit dismissive and assumption-laden, beginning with Brown’s premise that psychology doesn’t have data requiring advanced methods, and ending with an apparent assertion that quantitative psychologists are learning about things that are “untrue.” Andrew, didn’t you have a saying that when you run into a new statistical problem you go and check whether it’s previously been worked out by psychometricians? I took that comment as a tip of the hat to the long history of fine statistical work accomplished in quantitative psychology over the last 150 years.

    Psychologists do have abundant, dense data they need to analyze. From psycho- and neurophysiological data (EKG, fMRI, skin conductance), to dense EMA data (addiction, mood, social interaction), to continuous behavior monitors (eye tracking, keyboard logging, health indicators, worker productivity), to dense daily diaries of self-reports, to extended longitudinal data on dyads and systems (parent-child, teacher-student, families, therapist-client), to complex genetic designs, psychology is challenged more by data too complex for standard methods than by data too sparse for complex methods.

    Add to this the growing realization that models of change and development often need to be articulated at the individual level if we want to avoid misinterpreting between-person covariation as reflective of within-person change, and you see why complex dynamic models are appearing more and more in the psychological literature. Sure, there’s plenty of excess and hype and abuse of fancy methods in psychology. But please don’t be too dismissive of the field; there are many serious minds at work in quant psych, and I happen to consider the authors of the paper under discussion to be among them.

  5. “Examine [the data] from every angle. Analyze the sexes separately.”

    I’d be interested in a list of examples that you’d consider likely to be just a fishing expedition, and in what would be reasonable ways to break down data on human subjects.

    At the Fishing Expedition pole, I would put Day of the Week Born as being highly likely to be completely random. (But, now that I think about it, if you were doing a study trying to estimate the impact of being born by Caesarean section on life outcomes, the fact that fewer C-sections are scheduled for Saturday, Sunday, and Wednesday might be useful. But, most of the time, it’s just a way to search for a fluke statistically significant result.)

    At the Quite Possibly Relevant pole, I would put sex and age, followed by the other big Identity Politics categories such as race, ethnicity, sexual orientation, class, religion, native or immigrant, and so forth.

      • Okay, so why not, when planning the analysis, make sex and race and some similar Identity Politics categories pre hoc subgroups? We have good Bayesian reasons for assuming ahead of time that sex or race just might turn out to matter.
