Lizka Vaintrob writes:
I’m reaching out to share an announcement about a contest for critically engaging with work in effective altruism. The total prize amount is $100,000, and the deadline for submissions is September 1. You can see the announcement of the contest here.
We (the contest organizers) would like to get submissions from people who aren’t very involved in effective altruism, and we can’t do that by simply posting on the Effective Altruism Forum. I would love to get submissions from your readers, and I’d be really grateful if you shared the announcement link with them.
I don’t know anything about this but it seems like a good idea, so I’m spreading the word here.
P.S. A few months later, here are the results of the contest!
I am hoping they get some good criticisms. If not, I fear they may wrongly assume their ideas are especially good when it’s more likely the contest just didn’t attract much attention. I agree with a lot of what EA says, but the hyper-rationalist, intelligence-worshipping culture within it is a big issue.
If you can articulate why it is an issue, you might get those $100,000.
+1. Write it up! I clicked through to the contest page, and I hope others do the same. This contest is very much up the alley of many of this blog’s participants!
I spent about half an hour on the site without figuring out what exactly this contest is about.
I don’t mean to seem dismissive but there is a type of verbose style of communication being used that wastes a lot of time.
Anyway, the answer is a return to testing your hypothesis instead of a strawman null hypothesis. But this has already been repeated by many people ad nauseam to no effect.
Anon:
Regarding your last sentence: it’s not “to no effect.” I agree it’s been frustrating, but I think the continuing beating of this dead horse has had some effect on many researchers.
Sure, but it seems to have had very little effect. I will say that creating tools like Stan that make it easier to test a real theory may be the best approach.
Some will still use it to test default strawmen models but what can you do.
I’ll admit to the same experience. I read through all the material and have very little idea what it means – and I do feel like there are about 90% too many words (that don’t make it clearer). But to the extent that I can figure out what they are saying, I find it somewhat strange to be giving away potentially large sums of money to people who can critique what they currently do. Perhaps that is an admirable break from the usual confirmatory approach – but given their focus on effective altruism, I can’t help but wonder if there are better ways to spend the money.
This applies to any kind of ideology/philosophy, not just effective altruism: if you believe your idea is so much superior for making the world a better place, you can justify spending huge amounts of resources on spreading that idea as a worthwhile investment.
For example, you can spend $100k buying mosquito nets, improving African GDP by $1M (according to EA calculations),
or
you can spend $100k convincing people to engage with/spread idea of effective altruism leading to people donating $200k to buy mosquito nets.
Personally, as with most things related to effective altruism, I like their ideas in theory, but am more skeptical in practice.
Your comment confuses me. As I read the announcement, they are looking to fund work that is critical of effective altruism (at least critical of how they approach it). Your comment appears to be saying that they believe their ideas to be superior – if so, isn’t it unusual that they seek to fund people who may feel otherwise? (not to say it would be bad for them to do so – perhaps even enviable – but strange, nonetheless)
Yes, it reminded me of pre-notation algebra:
https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1949-8594.2000.tb17262.x
Which means essentially: (x+10)(x-10) = x^2 - 100
Of course, that is assuming those words were all really necessary to convey their precise meaning.
Dale:
The way I’m understanding it is that there are tens of millions(?) of dollars donated to effective altruism (EA) causes. But there’s some doubt and debate within the community about whether that money is actually being used most effectively. So this contest is a way to possibly find out if there’s something they’re missing which could make them even more effective altruists.
I think there’s also a secondary goal of spreading their ideas more widely. For instance, I’m sure there were people on this blog who were unfamiliar with EA but now are not. And if they want to win the contest they would have to learn a lot about EA. And considering that there are e.g. PhD students here who may later end up in high-ranking positions in places like the FDA or whatever, they likely think 100k has a reasonably high expected ROI.
M: something they’re missing which can make them even more effective altruists
Concise effective communication?
But it does remind me of the philosophy journal that asked Andrew and me to submit a “statistician’s view” on amalgamation of evidence. The editor rejected our submission because we had not spent weeks reviewing the philosophy literature on similar topics. I reached out a second time a couple years later to the same journal with different editors, and they suggested I read a selection of recent papers. The first was so misguided and ill-informed that I stopped.
Something about requiring the Red Team to study your worldview seems the opposite of what will likely get you out of your groupthink. The secondary goal of spreading their ideas more widely likely interferes with the claimed primary goal.
The biggest issue with the effective altruism movement is that it’s particularly vulnerable to something known as “Pascal’s mugging”, especially with regards to things like AI risk.
The core problem is that you can funnel money toward fixing very unlikely “problems” by claiming that those problems would be catastrophic. Expected utility = probability_of_event * utility_of_event, so if you pump utility_of_event up to an absurd value, you can make the expected utility seem arbitrarily high or low.
This causes people to waste tons of money on things like AI risk that have a close to zero chance of happening, because the AI risk people claim that if it does happen, it would destroy the world.
It also reminds me of this:
https://link.springer.com/article/10.1007/s13280-019-01280-0
The bigger threat is hunger/malnutrition, so people use the mosquito nets to catch fish instead. But the holes in the nets are too small, so too many fish are caught before they can breed. Which may then make the hunger problem even worse in the future.
It seems the hunger problem should be addressed before the malaria problem.
Interesting article on using mosquito nets in artisanal fisheries. There would need to be more work on quantifying removals and areas impacted to try to put removals in context but is a great example of unintended consequences.
The better approach all around would be to start a gene-drive GMO mosquito eradication program to break the malaria cycle by removing the vector mosquito. Since there are hundreds of mosquito species but only a few disease-carrying ones, the local eradication of the disease carriers would not be missed (from both an ecological and a health perspective).
GMO mosquitoes will have their own unintended consequences. E.g., after DDT was phased out, malaria rebounded to be worse than ever.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2620631/
Then there is the tetracycline aspect (the lethal gene is suppressed with that antibiotic).
In general, it seems people pay lip service to natural selection but do not really appreciate what it means.
Anoneuoid, that’s not an issue with effective altruism, that’s an issue with charity in general (and unexpected consequences).
You run into those sorts of issues with every possible charity program.
To be clear, the people involved in that subset of EA do not consider the x-risks they fund to be all that unlikely. They consider AI risk to be a realistically serious concern, with at least nontrivial odds of happening, and possibly in the next several decades as we start to amass brain-level computing power. Treating this as Pascal’s mugging isn’t really getting the argument right. There are detailed arguments for why we might expect increasingly powerful AI development to be dangerous by default.
It seems that the rowers/altruists want to pay someone to whip them into being better rowers/altruists.
There’s a kind of sad theme in the comments on this thread. I was trying to think of what to make of the whining about wordiness. I mean, they’re trying to solicit submissions from people outside their discipline, and it can be hard to communicate across disciplines.
But then I saw Keith’s comment about the philosophy journal insisting that critics go down their rabbit hole and, presumably, make their arguments in Wonderland logic. Deeply discouraging stuff, but it’s also a common human reflex: think of the struggling math student who wants to tell you all about their incorrect reasoning (a useful exercise, but only up to a point). I work in industry now, and I see dumb errors all the time, but the instances where people actually want to be corrected are a little hard to predict. Sometimes the legacy approach really is good enough for the purpose it’s being used for, and improving methodology really isn’t worth spending everyone’s bandwidth. It takes a lot of listening to determine whether that’s the case (and even more listening to convince anyone outside your own head). So again, there’s value in hearing what legacy practitioners have to say for their methods (up to a point). It’s fun to imagine that we can simply drop our brilliant reasoning on people and they will pore through our text with rapt attention, but of course that’s not how people work. The best you can really ever hope for is that people will meet you halfway on this kind of engagement. The math student usually will, but mid-career professionals are more hit-and-miss.
I guess this is all just to say that identifying problems is relatively easy and not very valuable, whereas convincing people they can and should change their approaches is many orders of magnitude more difficult. Changing minds is also not a terribly rewarding business, in my experience. So are these EA guys serious? Who knows. I’m sure they are sincere in their belief that they want to understand how they could do better. But are they really prepared for the hard work of meeting halfway on engaging with serious challenges to existing methods? Who knows…
(But I do know that we’re not doing our part to meet them halfway when we whine about the solicitation being too wordy.)
I don’t think the complaint is just “it’s wordy” though. It’s loaded with jargon and is vague on some key points. Like others here, I have a PhD and am not completely unfamiliar with EA and I’m still not totally sure I understand what they want, even after reading it carefully. That’s not good writing no matter how you slice it.