
The neurostatistical precursors of noise-magnifying statistical procedures in infancy

David Allison points us to this paper, The neurodevelopmental precursors of altruistic behavior in infancy, by Tobias Grossmann, Manuela Missana, and Kathleen Krol, which states:

The tendency to engage in altruistic behavior varies between individuals and has been linked to differences in responding to fearful faces. The current study tests the hypothesis that this link exists from early in human ontogeny. Using eye tracking, we examined whether attentional responses to fear in others at 7 months of age predict altruistic behavior at 14 months of age. Our analysis revealed that altruistic behavior in toddlerhood was predicted by infants’ attention to fearful faces but not happy or angry faces. Specifically, infants who showed heightened initial attention to (i.e., prolonged first look) followed by greater disengagement (i.e., reduced attentional bias over 15 seconds) from fearful faces at 7 months displayed greater prosocial behavior at 14 months of age. Our data further show that infants’ attentional bias to fearful faces and their altruistic behavior was predicted by brain responses in the dorsolateral prefrontal cortex (dlPFC), measured through functional near-infrared spectroscopy (fNIRS). This suggests that, from early in ontogeny, variability in altruistic helping behavior is linked to our responsiveness to seeing others in distress and brain processes implicated in attentional control. These findings critically advance our understanding of the emergence of altruism in humans by identifying responsiveness to fear in others as an early precursor contributing to variability in prosocial behavior.

Allison writes:

From the paper, I discern that stepwise regression was used, but I could not determine how many variables were considered, or whether any adjustment was made to the reported significance levels to accommodate the overfitting that is known to occur with stepwise selection. This raises questions when interpreting the results.

Also, they’re making the classic error of labeling differences as real if they’re statistically significant and zero if they’re not. That’s a standard statistical technique, but it’s a disaster; it’s a way to add noise to your study and get overconfidence.

I assume the authors of this paper were doing their best, but I’m very doubtful that they’ve offered real support for their claim that their findings “critically advance our understanding of the emergence of altruism in humans.” That’s a bit over the top, no?
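The problem with unadjusted stepwise selection can be seen in a quick simulation (a hypothetical illustration, not a reconstruction of the paper's actual analysis): if you screen many candidate predictors and then test the winner with a naive t-test, "significant" results appear routinely even when every predictor is pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 20          # small sample, many candidate predictors
hits = 0
for _ in range(1000):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)          # outcome is pure noise
    # "Stepwise" step 1: pick the predictor most correlated with y
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    best = np.abs(r).max()
    # Naive (unadjusted) t-test for that best correlation
    t = best * np.sqrt((n - 2) / (1 - best**2))
    if t > 2.048:                       # two-sided p < .05 critical value, df = 28
        hits += 1
print(f"Nominal 5% test rejects in {hits / 10:.1f}% of pure-noise datasets")
```

With 20 independent noise predictors, the chance that the best one clears the nominal 5% threshold is roughly 1 − 0.95^20 ≈ 64%, which is what the simulation shows: the selection step, not any real effect, generates the "significance."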


  1. Kyle C says:

    I’ve recently been reading blog posts by a philosophy Ph.D. student who is urging her fellow philosophers of mind to “take on board” the rich trove of “discoveries” made by neuroscience in the past 20 years. Bah.

    • Keith O'Rourke says:

      Susan Haack has written some criticisms of taking on board “discoveries” as part of the replication crisis in philosophy.

    • Martha (Smith) says:

My skepticism of neuroscience “results” is pretty large. First, I found that papers in the field often had a stock phrase like, “We used SAS Proc Mixed” without giving details (e.g., which factors were considered fixed and which were considered random). Then, more recently, I learned that fMRI analyses very often use just the magnitude of the fMRI output (which is in complex numbers) rather than the output itself; and, relatedly, that if another researcher asks for a researcher’s fMRI data to check results, they typically get a “preprocessed” version, which may be “preprocessed” by an idiosyncratic or ad hoc method that is not recorded.
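A toy example (not tied to any particular fMRI pipeline) of what is lost when only the magnitude of a complex-valued signal is kept: two signals with completely different phase behavior can have identical magnitudes, so a magnitude-only analysis cannot tell them apart.

```python
import numpy as np

t = np.linspace(0, 1, 8)
a = np.exp(1j * 2 * np.pi * t)     # "voxel" whose phase rotates over time
b = np.ones(8, dtype=complex)      # "voxel" with constant phase

# The signals differ, but their magnitudes are indistinguishable:
print(np.allclose(a, b))           # the complex signals differ
print(np.allclose(np.abs(a), np.abs(b)))  # their magnitudes coincide
```

Any analysis that starts from `np.abs` of the data has already discarded the phase component entirely, which is the concern raised above.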

      • Aaron G says:

        Martha, in your conversations with researchers involved with fMRI analyses, has an explanation ever been given regarding the need to “preprocess” the data, and how the data is “preprocessed”?

        BTW, I have a link to a paper by Lindquist (2008) regarding the statistical analysis of fMRI data.

        • Martha (Smith) says:

          The information about getting only “preprocessed” data came from a seminar speaker last week; my impression was that she found it frustrating that this (preprocessing with no documentation of how) was the typical situation, and that the practice interfered with being able to use real data to try to figure out best practices.

        • Martha (Smith) says:

          Thanks for the link to the Lindquist paper. From my perspective, some key points in the Lindquist paper are:

          “A full spatiotemporal model of the data is generally not considered feasible and a number of short cuts are taken throughout the course of the analysis.” (p. 440)

          “While all the preprocessing steps outlined above are essential for the standard model assumptions required for statistical analysis to hold, there needs to be a clear understanding of the effects they have on both the spatial and temporal correlation structure. More generally, it is necessary to study the interactions among the individual preprocessing steps.” (p.451)

          “There has been explosive interest in the use of fMRI in recent years. The rapid pace of development and the interdisciplinary nature of the neurosciences present an enormous challenge to researchers. Moving the field forward requires a collaborative team with expertise in psychology, neuroanatomy, neurophysiology, physics, biomedical engineering, statistics, signal processing and a variety of other disciplines depending on the research question. True interdisciplinary collaboration is exceedingly challenging, because team members must know enough about the other disciplines to be able to talk intelligently with experts in each field. Due to the importance that statistics plays in this research, it is important that more statisticians get involved in these research teams for the methodology to reach its full potential. ” (p. 461)

          PS: Information about the talk I refer to:

Speaker: Raquel Prado (Department of Statistics at the University of California Santa Cruz)

          Title: Bayesian models for complex-valued fMRI

Abstract: Detecting which voxels/regions are activated by an external stimulus is one of the main goals in the analysis of functional magnetic resonance imaging (fMRI) data. Voxel time series in fMRI are complex-valued signals consisting of magnitude and phase components; however, most studies discard the phase and only use the magnitude data. We present a Bayesian variable selection approach for detecting activation at the voxel level from complex-valued fMRI (f(c)MRI) recorded during task experiments. We show that this approach leads to fast and improved detection of activation when compared to alternative magnitude-only approaches. We discuss and illustrate modeling extensions that incorporate additional spatial structure via kernel convolution for more flexible analysis of f(c)MRI. The complex-valued spatial model encourages voxels to be activated in clusters, which is appropriate in applied settings, as the execution of complex cognitive tasks, and therefore brain activation, usually involves populations of neurons spanning many voxels rather than isolated voxels. Finally, we present models that can handle multi-subject data and allow us to infer connectivity at the region-specific level in addition to voxel-specific activation. Model performance is evaluated through extensive and physically realistic simulation studies and in the analysis of human f(c)MRI.

      • Austin Fournier says:

        Also, maybe not their fault (getting people to sign up for an MRI is probably pretty hard), but neuroscience has like, the worst sample sizes imaginable a lot of the time. N = 8, for example.
