As Adam Becker shows in his book, EAs started out being reasonable ("give to charity as much as you can, and research which charities do the most good") but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".
I haven't read Becker's book and probably won't spend the time to do so. But if this is an accurate summary, it's a bad sign for that book, because plenty of EAs were bonkers all along.
As journalists and scholars scramble to account for this "new" version of EA (what happened to the bednets, and why are Effective Altruists (EAs) so obsessed with AI?), they inadvertently repeat an oversimplified and revisionist history of the EA movement. It goes something like this: EA was once lauded as a movement of frugal do-gooders donating all their extra money to buy anti-malarial bednets for the poor in sub-Saharan Africa; but now, a few EAs have taken their utilitarian logic to an extreme level, and focus on "longtermism", the idea that if we wish to do the most good, our efforts ought to focus on making sure the long-term future goes well; this occurred in tandem with a dramatic influx of funding from tech scions of Silicon Valley, redirecting EA into new cause areas like the development of safe artificial intelligence ("AI-safety" and "AI-alignment") and biosecurity/pandemic preparedness, couched as part of a broader mission to reduce existential risks ("x-risks") and "global catastrophic risks" that threaten humanity's future. This view characterizes "longtermism" as a "recent outgrowth" (Ongweso Jr., 2022) or even breakaway "sect" (Aleem, 2022) that does not represent authentic EA (see, e.g., Hossenfelder, 2022; Lenman, 2022; Pinker, 2022; Singer & Wong, 2019). EA's shift from anti-malarial bednets and deworming pills to AI-safety/x-risk is portrayed as mission-drift, given wings by funding and endorsements from Silicon Valley billionaires like Elon Musk and Sam Bankman-Fried (see, e.g., Bajekal, 2022; Fisher, 2022; Lewis-Kraus, 2022; Matthews, 2022; Visram, 2022). A crucial turning point in this evolution, the story goes, was EAs' encounter with the ideas of transhumanist philosopher Nick Bostrom of Oxford University's Future of Humanity Institute (FHI), whose arguments for reducing x-risks from AI and biotechnology (Bostrom, 2002, 2003, 2013) have come to dominate EA thinking (see, e.g., Naughton, 2022; Ziatchik, 2022).
This version of events gives the impression that EA's concerns about x-risk, AI, and "longtermism" emerged out of EA's rigorous approach to evaluating how to do good, and have only recently been embraced by the movement's leaders. MacAskill's publicity campaign for WWOTF certainly reinforces this perception. Yet, from the formal inception of EA in 2012 (and earlier), the key figures and intellectual architects of the EA movement were intensely focused on promoting the suite of causes that now fly under the banner of "longtermism", particularly AI-safety, x-risk/global catastrophic risk reduction, and other components of the transhumanist agenda such as human enhancement, mind uploading, space colonization, prediction and forecasting markets, and life extension biotechnologies.
To give just a few examples: Toby Ord, the co-founder of GWWC and CEA, was actively collaborating with Bostrom by 2004 (Bostrom & Ord, 2004), and was a researcher at Bostrom's Future of Humanity Institute (FHI) in 2007 (Future of Humanity Institute, 2007) when he came up with the idea for GWWC; in fact, Bostrom helped create GWWC's first logo (EffectiveAltruism.org, 2016). Jason Matheny, whom Ord credits with introducing him to global public health metrics as a means for comparing charity effectiveness (Matthews, 2022), was also working to promote Bostrom's x-risk agenda (Matheny, 2006, 2009), already framing it as the most cost-effective way to save lives through donations in 2006 (User: Gaverick [Jason Gaverick Matheny], 2006). MacAskill approvingly included x-risk as a cause area when discussing his organizations on Felificia and LessWrong (Crouch [MacAskill], 2010, 2012a, 2012b, 2012c, 2012e), and x-risk and transhumanism were part of 80K's mission from the start (User: LadyMorgana, 2011). Pablo Stafforini, one of the key intellectual architects of EA "behind-the-scenes", initially on Felificia (Stafforini, 2012a, 2012b, 2012c) and later as MacAskill's research assistant at CEA for Doing Good Better and other projects (see organizational chart in Centre for Effective Altruism, 2017a; see the section entitled "ghostwriting" in Knutsson, 2019), was deeply involved in Bostrom's transhumanist project in the early 2000s, and founded the Argentine chapter of Bostrom's World Transhumanist Association in 2003 (Transhumanismo.org, 2003, 2004). Rob Wiblin, who was CEA's executive director from 2013 to 2015 prior to moving to his current role at 80K, blogged about Bostrom and Yudkowsky's x-risk/AI-safety project and other transhumanist themes starting in 2009 (Wiblin, 2009a, 2009b, 2010a, 2010b, 2010c, 2010d, 2012). In 2007, Carl Shulman (one of the most influential thought-leaders of EA, who oversees a $5,000,000 discretionary fund at CEA) articulated, in a Felificia post, an agenda that is virtually identical to EA's "longtermist" agenda today (Shulman, 2007). Nick Beckstead, who co-founded and led the first US chapter of GWWC in 2010, was simultaneously engaging with Bostrom's x-risk concept (Beckstead, 2010). By 2011, Beckstead's PhD work was centered on Bostrom's x-risk project: he entered an extract from the work-in-progress, entitled "Global Priority Setting and Existential Risk: Crucial Ethical Considerations" (Beckstead, 2011b), into FHI's "Crucial Considerations" writing contest (Future of Humanity Institute, 2011), where it was the winning submission (Future of Humanity Institute, 2012). His final dissertation, entitled On the Overwhelming Importance of Shaping the Far Future (Beckstead, 2013), is now treated as a foundational "longtermist" text by EAs.
Throughout this period, however, EA was presented to the general public as an effort to end global poverty through effective giving, inspired by Peter Singer. Even as Beckstead was busy writing about x-risk and the long-term future in his own work, in the media he presented himself as focused on ending global poverty by donating to charities serving the distant poor (Beckstead & Lee, 2011; Chapman, 2011; MSNBC, 2010). MacAskill, too, presented himself as doggedly committed to ending global poverty…
(Becker's previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)
That Carl Shulman post from 2007 is hilarious.
The "two articles below" are by Yudkowsky.
User "gaverick" replies,
Shulman's response begins,
Ray mothersodding Kurzweil!