Over the last few years, effective altruism has gone through a rise-and-fall story arc worthy of any dramatic tragedy.

The pandemic made EAs look prescient for warning about global catastrophic risks, including biological threats. A masterful book launch put them on the cover of TIME. But then the arc reversed. The trouble started with FTX, whose founder Sam Bankman-Fried claimed to be acting on EA principles and had begun to fund major EA efforts; its collapse tarnished the community by association with fraud. It was bad for EA if SBF was false in his beliefs; it was worse if he was sincere. Now we’ve just watched a major governance battle over OpenAI that seems to have been driven by concerns about AI safety of exactly the kind long promoted by EA.

SBF was willing to make repeated double-or-nothing wagers until FTX exploded; Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much. Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present. Even Jaan Tallinn is “now questioning the merits of running companies based on the philosophy.”

On top of that, there is just the general sense of doom. All forms of altruism gravitate towards a focus on negatives. EA’s priorities are the relief of suffering and the prevention of disaster. While the community sees the potential of, and earnestly hopes for, a glorious abundant technological future, it is mostly focused not on what we can build but on what might go wrong. The overriding concern is literally the risk of extinction for the human race. Frankly, it’s exhausting.

So I totally understand why there has been a backlash. At some point, I gather, someone said, hey, we don’t want effective altruism, we want “effective accelerationism”—abbreviated “e/acc” (since of course we can’t just call it “EA”). This meme has been all over my social feeds lately.

I call it a meme and not a philosophy because… well, as far as I can tell, there isn’t much more to it than memes and vibes. And hey, I love the vibe! It is bold and ambitious. It is terrapunk. It is a vision of a glorious abundant technological future. It is about growth and progress. It is a vibe for the builder, the creator, the discoverer, the inventor.

But… it also makes me worried. Because to build the glorious abundant technological future, we’re going to need more than vibes. We’re going to need ideas. A framework. A philosophy. And we’re going to need just a bit of nuance.

We’re going to need a philosophy because there are hard questions to answer: about risk, about safety, about governance. We need good answers to those questions in part because mainstream culture is so steeped in fears about technology that the world will never accept a cavalier approach. But more importantly, we need good answers because one of the best features of the glorious abundant technological future is not dying, and humanity not being subject to random catastrophes, either natural or of our own making. In other words, safety is a part of progress, not something opposed to it. Safety is an achievement, something actively created through a combination of engineering excellence and sound governance. Our approach can’t just be blind, complacent optimism: “pedal to the metal” or “damn the torpedoes, full speed ahead.” It needs to be one of solutionism: “problems are real but we can solve them.”

You will not find a bigger proponent of science, technology, industry, growth, and progress than me. But I am here to tell you that we can’t yolo our way into it. We need a serious approach, led by serious people.

The good news is that the intellectual and technological leaders of this movement are already here. If you are looking for serious defenders and promoters of progress, we have Eli Dourado in policy, Bret Kugelmass and Casey Handmer in energy, Ben Reinhardt investing in nanotechnology, Raiany Romanni advocating for longevity, and many, many more, including the rest of the Roots of Progress fellows.

I urge anyone who values progress to take the epistemic high road. Let’s make the best possible case for progress that we can, based on the deepest research, the most thorough reasoning, and the most intellectually honest consideration of counterarguments. Let’s put forth an unassailable argument based on evidence and logic. The glorious abundant technological future is waiting. Let’s muster the best within ourselves—the best of our courage and the best of our rationality—and go build it.


Follow-up thoughts based on feedback

  1. Many people focused on the criticism of EA in the intro, but this essay is not a case against EA or against x-risk concerns. I only gestured at EA criticism in order to acknowledge the motivation for a backlash against it. This is really about e/acc. (My actual criticism of EA is longer and more nuanced, and I have not yet written it up.)
  2. Some people suggested that my reading of the OpenAI situation is wrong. That is quite possible. It is my best reading based on the evidence I've seen, but there are other interpretations, and outsiders don't really know. Even if my reading is wrong, it doesn't change my points about e/acc.
  3. The quote from the Semafor article may not accurately represent Jaan Tallinn's views. A more careful reading suggests that Tallinn was criticizing self-governance schemes, rather than criticizing EA as a philosophy underlying governance.

Thanks all.

15 comments

> Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much.

I suspect that the board will look better over time as more information comes out.

Here are some quotes from the TIME article where Sam was named CEO of the Year:

> But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.”

> ... Some worried that iterative deployment would accelerate a dangerous AI arms race, and that commercial concerns were clouding OpenAI’s safety priorities. Several people close to the company thought OpenAI was drifting away from its original mission. “We had multiple board conversations about it, and huge numbers of internal conversations,” Altman says. But the decision was made. In 2021, seven staffers who disagreed quit to start a rival lab called Anthropic, led by Dario Amodei, OpenAI’s top safety researcher.

> ... For some time—little by little, at different rates—the three independent directors and Sutskever were becoming concerned about Altman’s behavior. Altman had a tendency to play different people off one another in order to get his desired outcome, say two people familiar with the board’s discussions. Both also say Altman tried to ensure information flowed through him. “He has a way of keeping the picture somewhat fragmented,” one says, making it hard to know where others stood.

> ... Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions.

In other words, it appears that Sam started the fight, not the board. Is it really that crazy for a board to attempt to remove a CEO who was attempting to undermine its oversight of him?

They were definitely outclassed in terms of their political ability, but I don't think they were incompetent. It's more that when you go up against a much more skilled actor, they end up making you look incompetent.

There is an argument to be made that e/acc is the Jungian shadow to EA. 

There is a fundamental difference in principles between the two movements: EA gradually, and then suddenly, fell into a paternalistic disregard (if not disdain) for the negative feedback that the market provides -- e.g., Helen Toner's belief that the dissolution of OpenAI was an acceptable alternative to resolving differences with the CEO. With that exception, though, most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.

But EA started with philosophical principles and became a mass movement. e/acc has more or less begun as a mass movement, and is only gradually and haltingly identifying its principles.

Both EA and e/acc reflexively repress what is valid in the other's approach to promoting progress. While e/acc is now ascendant and EA is on the ropes, until each movement can integrate its shadow, both will fall short of their potential in activating human energy in service of progress.

What would a fully integrated vision of progress look like? It would acknowledge the valid view of e/acc that markets generally provide the best mechanism for gathering and processing information about the needs of dispersed groups of individuals, while at the same time acknowledging and grappling with the reality that some important needs cannot be met by markets (because the preconditions for market formation either have not been met or cannot be).

But I would be very careful posting this sort of essay online right now. You are either for or against at the moment; anybody trying to add nuance is likely to be sidelined.

> Most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.

EA here.

Doesn't seem true as far as I can tell. E/acc doesn't want to expose its beliefs to falsification; that's why it's almost always about attacking the other side and almost never about arguing for things on the object level.

E/acc doesn't care about integrity either. They're very happy to tweet all kinds of weird conspiracy theories.

Anyway, I could be biased here, but that's how I see it.

I can understand why you say what you say about falsification. The way the e/acc community is operating right now is more crusade than critical. But I haven't seen the evidence for lack of integrity that you appear to have seen. Not saying it's not there; just I haven't seen it.

I wouldn't write off the people behind e/acc just yet, however. In the end, the scientific mindset may win out over the short-term desire to score points and dunk on a competing vision that has been embarrassed in various ways.

If there were any part of e/acc that you might find worth incorporating into EA, what might it be?

Most recent thing that pops into mind is Beff trying to spread the meme that EA is just a bunch of communists.

E/acc seems to do a good job of bringing people together in Twitter spaces.

Hadn't seen that. Too bad he's misrepresenting facts.

But that hints at what might be worth reevaluating in EA. Jung had this notion of individuation, in which we have to incorporate into our personality conflicting aspects of ourselves in order to fully realize our capabilities. EA seems very academic or analytical in its approach to promoting progress whereas e/acc is more political or emotional. I believe it will take both to realize a future in which progress is accelerated in a way that benefits even the most vulnerable members of society.

The thing about e/acc is it's a mix of the reasonable and the insane doom cult. 

The reasonable parts talk about AI curing diseases, etc., and ask to speed it up.

Given some chance of AI curing diseases, and some chance of AI-caused extinction, it's a tradeoff.

Now, where the optimal point of the tradeoff lands depends on whether we care only about existing humans or about all potential future humans. And also on how big we think the risk of AI-caused extinction is.

If we care about all future humans, and think AI is really dangerous, we get a "proceed with extreme caution" position: one that accepts the building of ASI eventually, but is quite keen to delay it 1,000 years if that buys any more safety.

On the other end, some people think the risks are small, and mostly care about themselves/current humans. They are more e/acc.
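
To make the shape of that tradeoff concrete, here is a toy expected-value sketch; the symbols below are illustrative placeholders, not numbers or notation from either camp:

$$\mathbb{E}[\text{accelerate}] = p_{\text{cure}} \, V_{\text{cure}} - p_{\text{doom}} \, V_{\text{doom}}$$

where $p_{\text{cure}}$ and $V_{\text{cure}}$ are the probability and value of AI delivering the cures, and $p_{\text{doom}}$ and $V_{\text{doom}}$ are the probability and magnitude of AI-caused extinction. Acceleration looks favorable only when $p_{\text{cure}} V_{\text{cure}} > p_{\text{doom}} V_{\text{doom}}$. If $V_{\text{doom}}$ counts all potential future humans, it is astronomically large and even a tiny $p_{\text{doom}}$ dominates; if it counts only current humans, the cure term can win. Much of the disagreement fits in that one inequality.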

But there are also various "AI will be our worthy successors" and "AI will replace humans, and that's great" strands of e/acc that are OK with the end of humanity.

I don't see any specific criticism of effective altruism other than "I don't like the vibes".

And the criticism drawn from the acrimonious corporate politics at OpenAI:

"Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much."

> Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present.

Shutting down a company, plus some acrimonious boardroom discussion, is hardly "catastrophic". And it can be the right move, if you think the danger exceeds the value the company is creating.

I.e., if a company makes nuclear power plants that are meltdowns just waiting to happen, or kids' toys full of lead or something, shutting that company down is a good move.

Hi Jason,

This is great. I would love to read more about how you believe Progress Studies could become a philosophy on par with Effective Altruism. I think an advantage EA has is its roots in John Stuart Mill and some of his contemporaries. Personally, I've found it harder to pinpoint which philosophers were early proponents of Progress Studies; my sense is that the idea of building, whatever the trials and tribulations, is fundamentally a Stoic idea. Indeed, I think Ayn Rand's ideas, particularly on the importance of individualism, are important if one would like to create an epistemic history of Progress Studies.

Thanks for sharing this draft.

Thanks Robert. I think progress studies needs a more well-defined value system. I have gestured at “humanism” as the basis for this, but it needs much more.

I agree that Rand's ideas are important here, particularly her view of creative/productive work as a noble activity and of scientists, inventors, and business leaders as heroic figures.

Style suggestion: you could put the penultimate paragraph before the preceding one and delete the final paragraph. That would reduce the preachiness factor at the end and the repetition of ideas between the last and third-to-last paragraphs. Plus, going straight from "we need serious people" to the paragraph about those people is what your structure is asking for.

Thanks, good point about the flow here.

The takes here are suspiciously similar to Vitalik's techno-optimism (d/acc) post (https://vitalik.eth.limo/general/2023/11/27/techno_optimism.html#compatible). Wondering if there are any thoughts on this?

I liked Vitalik's post and generally agree with it.

A potential area of overlap between effective altruism and Roots of Progress is the non-profit New Harvest, which funds research into making meat, eggs, and milk without animals.