More from PPNAS

Kevin Lewis asks for my take on two new papers:

Study 1:
Honesty plays a key role in social and economic interactions and is crucial for societal functioning. However, breaches of honesty are pervasive and cause significant societal and economic problems that can affect entire nations. Despite its importance, remarkably little is known about the neurobiological mechanisms supporting honest behavior. We demonstrate that honesty can be increased in humans with transcranial direct current stimulation (tDCS) over the right dorsolateral prefrontal cortex. Participants (n = 145) completed a die-rolling task where they could misreport their outcomes to increase their earnings, thereby pitting honest behavior against personal financial gain. Cheating was substantial in a control condition but decreased dramatically when neural excitability was enhanced with tDCS. This increase in honesty could not be explained by changes in material self-interest or moral beliefs and was dissociated from participants’ impulsivity, willingness to take risks, and mood. A follow-up experiment (n = 156) showed that tDCS only reduced cheating when dishonest behavior benefited the participants themselves rather than another person, suggesting that the stimulated neural process specifically resolves conflicts between honesty and material self-interest. Our results demonstrate that honesty can be strengthened by noninvasive interventions and concur with theories proposing that the human brain has evolved mechanisms dedicated to control complex social behaviors.

Study 2:
Academic credentials open up a wealth of opportunities. However, many people drop out of educational programs, such as community college and online courses. Prior research found that a brief self-regulation strategy can improve self-discipline and academic outcomes. Could this strategy support learners at large scale? Mental contrasting with implementation intentions (MCII) involves writing about positive outcomes associated with a goal, the obstacles to achieving it, and concrete if-then plans to overcome them. The strategy was developed in Western countries (United States, Germany) and appeals to individualist tendencies, which may reduce its efficacy in collectivist cultures such as India or China. We tested this hypothesis in two randomized controlled experiments in online courses (n = 17,963). Learners in individualist cultures were 32% (first experiment) and 15% (second experiment) more likely to complete the course following the MCII intervention than a control activity. In contrast, learners in collectivist cultures were unaffected by MCII. Natural language processing of written responses revealed that MCII was effective when a learner’s primary obstacle was predictable and surmountable, such as everyday work or family obligations but not a practical constraint (e.g., Internet access) or a lack of time. By revealing heterogeneity in MCII’s effectiveness, this research advances theory on self-regulation and illuminates how even highly efficacious interventions may be culturally bounded in their effects.

He only sent me the abstracts, which is kind of a nice thing to do cos then I feel under no obligation to read the papers (which he tells me will appear in PPNAS and are embargoed until this very moment).

Anyway, here was my reply:

#1 looks like a forking-paths disaster but, hey, who knows? I guess it’s a candidate for a preregistered replication study.

#2 looks interesting as a main effect—if a simple trick helps people focus, that’s good—but I’m suspicious of the interaction for the usual reasons of confounders and forking paths.

5 thoughts on “More from PPNAS”

  1. This arouses my skepticism:

    “Learners in individualist cultures were 32% (first experiment) and 15% (second experiment) more likely to complete the course following the MCII intervention than a control activity. In contrast, learners in collectivist cultures were unaffected by MCII.”

    First of all, I don’t see how this intervention is relevant only to individualist cultures. Don’t people in any culture need to set goals and overcome obstacles? The interpretation “learners in collectivist cultures were unaffected by MCII” suggests to me that the intervention in many cases simply failed to produce an effect. The researchers seem to have assumed, rather than demonstrated, that collectivism accounts for the difference.

    Also, the effect for the first experiment seems strangely large. I wonder what was actually involved in completing the online course. Did students need to pass the course in order to “complete” it, or did they just have to go through all the lessons and turn in assignments? What’s even stranger here is the authors’ subsequent breakdown of these results:

    “Natural language processing of written responses revealed that MCII was effective when a learner’s primary obstacle was predictable and surmountable, such as everyday work or family obligations but not a practical constraint (e.g., Internet access) or a lack of time.”

    So if you looked at the subgroup of participants with “predictable and surmountable” primary obstacles, would you find a huge effect? Did the intervention have a negative effect for any subgroups?

    Of course I’d have to see the study to answer any of these questions. But the abstract gives me a sense of what to look for. First of all, I’d want to see what the online course was all about and what was involved in completing it. Second, I’d like to see the details of the two experiments. Third, I’d like to see different breakdowns of the data and the authors’ rationale for breaking it down in particular ways (individualist vs. collectivist cultures, “predictable/surmountable” vs. other kinds of obstacles).

    • +1

      The sample size is very large here, so if they’re claiming no effect for collectivist cultures, the observed difference between control and treatment must really be tiny. It is difficult to believe that a single intervention would have a precisely estimated effect of 32% change in US and Germany, but a precisely estimated 0% change in China and India.

        • Thanks for the link. These effect sizes don’t seem so crazy anymore – I was thinking 32% referred to completion rates themselves, but it refers to applying the % change formula to the difference in completion rates for control vs. treatment, which is roughly 5.5% to 7.25%.

          And to give credit to the authors, there are no p-values in the neighborhood of 0.03. There are signs of flexibility in the analysis, but the main results are supported by p-values that would be difficult to achieve through creative noise mining alone.
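
          The percentage-change arithmetic above can be checked with a quick sketch. The 5.5% and 7.25% completion rates are the rough figures quoted in this thread, not exact values from the paper:

          ```python
          # Rough completion rates quoted in the comment above (not exact
          # values from the paper): ~5.5% under control, ~7.25% under the
          # MCII treatment.
          control = 0.055
          treatment = 0.0725

          # Absolute difference: under 2 percentage points.
          absolute_diff = treatment - control

          # Relative ("% change") difference: this is the quantity the
          # abstract's "32% more likely to complete" refers to.
          relative_change = (treatment - control) / control

          print(f"absolute difference: {absolute_diff:.4f}")   # ~0.0175
          print(f"relative change: {relative_change:.0%}")     # ~32%
          ```

          So a headline "32% more likely to complete" is consistent with a modest absolute shift in completion rates of under two percentage points.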
