Contest for critically engaging with effective altruism

Lizka Vaintrob writes:

I’m reaching out to share an announcement about a contest for critically engaging with work in effective altruism. The total prize amount is $100,000, and the deadline for submissions is September 1. You can see the announcement of the contest here.

We (the contest organizers) would like to get submissions from people who aren’t very involved in effective altruism, and we can’t do that by simply posting on the Effective Altruism Forum. I would love to get submissions from your readers, and I’d be really grateful if you shared the announcement link with them.

I don’t know anything about this but it seems like a good idea, so I’m spreading the word here.

P.S. A few months later, here are the results of the contest!

21 thoughts on “Contest for critically engaging with effective altruism”

  1. I am hoping they get some good criticisms. If not, I fear they may wrongly assume their ideas are especially good when it’s more likely the contest didn’t attract much attention. I agree with a lot of what EA says but the hyper-rationalist intelligence-worshipping culture within it is a big issue.

    • +1. Write it up! I clicked through to the contest page, and I hope others do the same. This contest is very much up the alley of many of this blog’s participants!

  2. I spent about half an hour on the site without figuring out what exactly this contest is about.

    I don’t mean to seem dismissive but there is a type of verbose style of communication being used that wastes a lot of time.

    Anyway, the answer is a return to testing your hypothesis instead of a strawman null hypothesis. But this has already been repeated by many people ad nauseam to no effect.

    • Anon:

      Regarding your last sentence: it’s not “to no effect.” I agree it’s been frustrating, but I think the continuing beating of this dead horse has had some effect on many researchers.

      • Sure, but it seems to have had very little effect. I will say that creating tools like Stan that make it easier to test a real theory may be the best approach.

        Some will still use it to test default strawmen models but what can you do.

    • I’ll admit to the same experience. I read through all the material and have very little idea what it means – and I do feel like there are about 90% too many words (that don’t make it clearer). But to the extent that I can figure out what they are saying, I find it somewhat strange to be giving away potentially large sums of money to people who can critique what they currently do. Perhaps that is an admirable break from the usual confirmatory approach – but given their focus on effective altruism, I can’t help but wonder if there are better ways to spend the money.

      • This applies to any kind of ideology/philosophy, not just effective altruism, but if you believe your idea is so much superior for making the world a better place, you can justify spending huge amounts of resources on spreading that idea as a worthwhile investment.
        For example, you can spend $100k buying mosquito nets, improving African GDP by $1M (according to EA calculations),
        or
        you can spend $100k convincing people to engage with/spread the idea of effective altruism, leading to people donating $200k to buy mosquito nets.
        Personally, as with most things related to effective altruism, I like their ideas in theory but am more skeptical in practice.
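        The back-of-the-envelope comparison above can be sketched in a few lines. All the figures are the commenter's illustrative assumptions, not real EA estimates:

```python
# Back-of-the-envelope comparison from the comment above, using the
# commenter's illustrative figures (not real EA estimates).

def direct_impact(budget, gdp_per_dollar=10):
    """Spend the budget on mosquito nets directly; assume each $1 of nets
    raises GDP by $10 (the commenter's $100k -> $1M figure)."""
    return budget * gdp_per_dollar

def meta_impact(budget, dollars_raised_per_dollar=2, gdp_per_dollar=10):
    """Spend the budget on promoting EA instead; assume each $1 of outreach
    leads to $2 of donations (the commenter's $100k -> $200k figure),
    which are then spent on nets."""
    return budget * dollars_raised_per_dollar * gdp_per_dollar

budget = 100_000
print(direct_impact(budget))  # 1000000: $1M of GDP improvement
print(meta_impact(budget))    # 2000000: $2M, so outreach "wins" under these assumptions
```

        Under these assumed multipliers, spending on self-promotion always dominates spending on the object-level cause, which is exactly the commenter's worry.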

        • Your comment confuses me. As I read the announcement, they are looking to fund work that is critical of effective altruism (at least critical of how they approach it). Your comment appears to be saying that they believe their ideas to be superior – if so, isn’t it unusual that they seek to fund people who may feel otherwise? (not to say it would be bad for them to do so – perhaps even enviable – but strange, nonetheless)

      • Yes, it reminded me of pre-notation algebra:

        If the instance be, “ten and thing to be multiplied by thing less ten,” then this is the same as if it were said thing and ten by thing less ten. You say, therefore, thing multiplied by thing is a square positive; and ten by thing is ten things positive; and minus ten by thing is ten things negative. You now remove the positive by the negative, then there only remains a square. Minus ten multiplied by ten is a hundred, to be subtracted from the square. This, therefore, altogether, is a square less a hundred

        https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1949-8594.2000.tb17262.x

        Which means essentially: (x+10)(x-10) = x^2 – 100

        Of course, that is assuming those words were all really necessary to convey their precise meaning.
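        For what it’s worth, the modern notation can be checked mechanically in a couple of lines:

```python
# Verify the identity (x + 10) * (x - 10) == x**2 - 100 over a range of integers.
assert all((x + 10) * (x - 10) == x**2 - 100 for x in range(-100, 101))
print("identity holds")
```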

      • Dale:

        The way I’m understanding it is that there are tens of millions(?) of dollars donated to effective altruism (EA) causes. But there’s some doubt and debate within the community about whether that money is actually being used most effectively. So this contest is a way to possibly find out if there’s something they’re missing that could make them even more effective altruists.

        I think there’s also a secondary goal of spreading their ideas more widely. For instance, I’m sure there were people on this blog who were unfamiliar with EA but now are not. And if they want to win the contest they would have to learn a lot about EA. And considering that there are e.g. PhD students here who may later end up in high-ranking positions in places like the FDA or whatever, they likely think 100k has a reasonably high expected ROI.

        • M: something they’re missing which can make them even more effective altruists
          Concise effective communication?

          But it does remind me of the philosophy journal that asked Andrew and me to submit a “statistician’s view” on amalgamation of evidence. The editor rejected our submission because we had not spent weeks reviewing the philosophy literature on similar topics. I reached out a second time, a couple of years later, to the same journal with different editors, and they suggested I read a selection of recent papers. The first was so misguided and ill-informed that I stopped.

          Something about requiring the Red Team to study your worldview seems the opposite of what is likely to get you out of your groupthink. The secondary goal of spreading their ideas more widely likely interferes with the claimed primary goal.

  3. The biggest issue with the effective altruism movement is that it’s particularly vulnerable to something known as “Pascal’s mugging”, especially with regards to things like AI risk.

    The core problem is that you can funnel money towards fixing very unlikely “problems” by claiming that those problems would be catastrophic. Utility = probability_of_event * utility_of_event, so if you pump utility_of_event to an absurdly large value, you can make the expected utility seem arbitrarily high or low.

    This causes people to waste tons of money on things like AI risk that have a close to zero chance of happening, because the AI risk people claim that if it does happen, it would destroy the world.
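    The expected-value arithmetic behind this worry is easy to make concrete. All the numbers below are made up purely for illustration:

```python
# Pascal's mugging in miniature: a tiny probability times an astronomically
# large claimed harm can dominate an expected-value comparison.
# All numbers here are made up for illustration.

def expected_loss(probability, loss):
    """Expected loss = P(event) * size of the loss if the event occurs."""
    return probability * loss

# A likely, moderate harm vs. a "negligible" chance of "world-ending" stakes.
mundane = expected_loss(0.5, 1_000_000)  # e.g. a known, well-studied risk
apocalyptic = expected_loss(1e-9, 1e18)  # claimed catastrophe

# The speculative risk swamps the mundane one purely because the claimed
# stakes were pumped up -- the tiny probability barely matters.
print(apocalyptic > mundane)  # True
```

    The point of the objection is that the claimed loss is unbounded and unverifiable, so the claimant can always pick a number big enough to win the comparison.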

    • It also reminds me of this:

      Malaria is a serious global health issue, with around 200 million cases per year. As such, great effort has been put into the mass distribution of bed nets as a means of prophylaxis within Africa. Distributed mosquito nets are intended to be used for malaria protection, yet increasing evidence suggests that fishing is a primary use for these nets, providing fresh concerns for already stressed coastal ecosystems. While research documents the scale of mosquito net fisheries globally, no quantitative analysis of their landings exists. The effects of these fisheries on the wider ecosystem assemblages have not previously been examined. In this study, we present the first detailed analysis of the sustainability of these fisheries by examining the diversity, age class, trophic structure and magnitude of biomass removal. Dragnet landings, one of two gear types in which mosquito nets can be utilised, were recorded across ten sites in northern Mozambique where the use of Mosquito nets for fishing is common. Our results indicate a substantial removal of juveniles from coastal seagrass meadows, many of which are commercially important in the region or play important ecological roles. We conclude that the use of mosquito nets for fishing may contribute to food insecurity, greater poverty and the loss of ecosystem functioning.

      https://link.springer.com/article/10.1007/s13280-019-01280-0

      The bigger threat is hunger/malnutrition, so people use the mosquito nets to catch fish instead. But the holes in the nets are too small, so too many fish are caught before they can breed. Which may then make the hunger problem even worse in the future.

      It seems the hunger problem should be addressed before the malaria problem.

      • Interesting article on using mosquito nets in artisanal fisheries. There would need to be more work on quantifying removals and areas impacted to try to put removals in context but is a great example of unintended consequences.
        The better approach all around would be to start a gene-drive GMO mosquito eradication program to break the malaria cycle by removing the vector mosquito. Since there are hundreds of mosquito spp. but only a few disease-carrying ones, the local eradication of the disease carriers would not be missed (from both an ecological and a health perspective).

        • GMO mosquitoes will have their own unintended consequences. E.g., after DDT was phased out, malaria rebounded to be worse than ever.

          Across sub-Saharan Africa where the disease is holoendemic, most people are almost continuously infected by P. falciparum, and the majority of infected adults rarely experience overt disease. They go about their daily routines of school, work, and household chores feeling essentially healthy despite a population of parasites in their blood that would almost universally prove lethal to a malaria-naive visitor. This vigor in the face of infection is NAI to falciparum malaria. Adults have NAI, but infants and young children, at least occasionally, do not. NAI is compromised in pregnant women, especially primigravidae, and adults removed from their routine infections apparently lose NAI, at least temporarily. Interventions that reduce exposure below a level capable of maintaining NAI risk the possibility of catastrophic rebound, as occurred in the highlands of Madagascar in the 1980s, with epidemic malaria killing more than 40,000 people (259). Routine exposure to hyper- to holoendemic malaria protects a majority of individuals while killing a minority. Aggressive interventions that consider only that vulnerable minority risk compromising or eliminating the solid protection against severe malaria in the majority.

          https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2620631/

          Then there is the tetracycline aspect (the lethal gene is suppressed with that antibiotic).

          In general, it seems people pay lip service to natural selection but do not really appreciate what it means.

      • Anoneuoid, that’s not an issue with effective altruism, that’s an issue with charity in general (and unexpected consequences).

        You run into those sorts of issues with every possible charity program.

    • To be clear, the people involved in that subset of EA do not consider the x-risks they fund to be all that unlikely. They consider AI risk to be a realistically serious concern, with at least nontrivial odds of happening, and possibly in the next several decades as we start to amass brain-level computing power. Treating this as Pascal’s mugging isn’t really getting the argument right. There are detailed arguments for why we might expect increasingly powerful AI development to be dangerous by default.

  4. There’s a kind of sad theme in the comments on this thread. I was trying to think of what to make of the whining about wordiness. I mean, they’re trying to solicit submissions from people outside their discipline, and it can be hard to communicate across disciplines.

    But then I saw Keith’s comment about the philosophy journal insisting that critics go down their rabbit hole and, presumably, make their arguments in Wonderland logic. Deeply discouraging stuff, but it’s also a common human reflex: think of the struggling math student who wants to tell you all about their incorrect reasoning (a useful exercise, but only up to a point). I work in industry now, and I see dumb errors all the time, but the instances where people actually want to be corrected are a little hard to predict. Sometimes the legacy approach really is good enough for the purpose it’s being used for, and improving the methodology really isn’t worth everyone’s bandwidth. It takes a lot of listening to determine whether that’s the case (and even more listening to convince anyone outside your own head). So again, there’s value in hearing what legacy practitioners have to say for their methods (up to a point). It’s fun to imagine that we can simply drop our brilliant reasoning on people and they will pore through our text with rapt attention, but of course that’s not how people work. The best you can really ever hope for is that people will meet you halfway on this kind of engagement. The math student usually will, but mid-career professionals are more hit-and-miss.

    I guess this is all just to say that identifying problems is relatively easy and not very valuable, whereas convincing people they can and should change their approaches is many orders of magnitude more difficult. Changing minds is also not a terribly rewarding business, in my experience. So are these EA guys serious? Who knows. I’m sure they are sincere in their belief that they want to understand how they could do better. But are they really prepared for the hard work of meeting halfway on engaging with serious challenges to existing methods? Who knows…

    (But I do know that we’re not doing our part to meet them halfway when we whine about the solicitation being too wordy.)

    • I don’t think the complaint is just “it’s wordy” though. It’s loaded with jargon and is vague on some key points. Like others here, I have a PhD and am not completely unfamiliar with EA and I’m still not totally sure I understand what they want, even after reading it carefully. That’s not good writing no matter how you slice it.
