Crowdsourcing Data Analysis 2: Gender, Status, and Science

Emily Robinson writes:

Brian Nosek, Eric Luis Uhlmann, Amy Sommer, Kaisa Snellman, David Robinson, Raphael Silberzahn, and I have just launched a second crowdsourcing data analysis project following the success of the first one. In the crowdsourcing analytics approach, multiple independent analysts are recruited to test the same hypothesis on the same data set in whatever manner they see as best. If everyone comes up with the same results, then scientists can speak with one voice. If not, the subjectivity and conditionality of results on analysis strategy is made transparent.

The first crowdsourcing analytics initiative examined whether soccer referees give more red cards to dark skin toned than light-skin toned players (Silberzahn et al., in preparation; see project page on the Open Science Framework at https://osf.io/gvm2z/). The outcome was striking: although 62% of teams obtained a significant effect in the expected direction, estimated effect sizes ranged from moderately large to practically nil.

For this second project, we have collected the scientific dialogue from Edge.org to analyze how gender and status affect verbal dominance and verbosity. This project adds several key new features to the first crowdsourcing project, in particular having analysts operationalize the key variables on their own and giving analysts the opportunity to propose and vote on their own hypotheses to be tested by the group.

The full project description is here. If you’re interested in being one of the crowdstormer analysts, you can register here. All analysts will receive an author credit on the final paper. We would love to have Bayesian analysts represented in the group. Also, please feel free to let others know about the opportunity; anyone with the relevant data analysis skills is welcome to take part.

Sounds like fun. And I’m pretty sure they won’t be following up in a few weeks with an announcement that this was all a hoax.

7 Comments

  1. Thom says:

    I think this is a good idea and look forward to reading it. However, I’m a bit worried by the oversimplistic argument: “If everyone comes up with the same results, then scientists can speak with one voice. If not, the subjectivity and conditionality of results on analysis strategy is made transparent.”

    What if someone put up data on voting and income, and one analysis (using the average income in each state vs. the proportion of Democratic votes) showed that richer people are more likely to vote Democratic, while another analysis, looking at voting within each state, showed the opposite? The quality and appropriateness of the analysis matters. A minor issue is that you need to define “same”. (I get fed up with seeing two results showing the same pattern being labelled as inconsistent just because one is p = .08 and the other is p = .03.)

  2. Erikson says:

    As someone who contributed to the first project, I’m highly supportive of the initiative! It was my first project with Stan, and it worked well with the big dataset we had, while JAGS couldn’t even start (with my lousy skills, at least).

    Mayo’s and Thom’s concern about ‘same results = truth’ is justified, but the final article of the first project went the other way around. The main point in the final article, I think, is to show how the results were contingent on analysis decisions, assumptions and prior beliefs, like choice of covariates and likelihood function; a good example of the Garden of Forking Paths. But, even so, many teams found similar effect sizes and confidence intervals for the first question.

    • Thom says:

      That’s what I hoped was intended, but I think the statement I objected to was unhelpful. Consensus is good, but so is showing that existing data sets are inconclusive or that the story is more complicated than a simple analysis might suggest.

      • Keith O'Rourke says:

        I’ve always had similar problems giving an explanation of meta-analysis that was inclusive of possibly discovering that the “evidence” from apparently similar studies was inconsistent/contradictory, and/or that there was no evidence in any of the studies at all (that’s really worth learning).

        For example, this phrase has survived on Wikipedia for a couple of years: “methods for contrasting and combining results from different studies, in the hope of identifying patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light in the context of multiple studies”. But distracting phrases like this always get added: “The motivation of a meta-analysis is to aggregate information in order to achieve a higher statistical power for the measure of interest”. The first task is to assess whether there is evidence in the studies and credible replication of it. In terms of three questions: should another study be done, if so how, and if not, what should be taken as the evidence and its adequate uncertainty?

        You want multiple perspectives about something possibly common, not common perspectives.

        Here three questions might be helpful: should another analysis be done, if so how, and if not, what should be taken as the evidence and its adequate uncertainty?

  3. […] are currently working on a second effort in the same vein: to crowdsource the analysis of how gender and status impact scientific debate. You can trust the results will be interesting, and thanks to the crowd, more likely to be […]
