Chris Leong

The Offense-Defense Balance Rarely Changes

It isn't clear that the offense-defense balance directly affects the number of deaths in a conflict in the way that you claim. For example, machine-gun nests benefited the defenders significantly, but could quite easily have resulted in more deaths in warfare, because armies were still using tactics that hadn't yet accounted for them.

If you had told people in the 1970s that in 2020 terrorist groups and lone psychopaths would be able to access, from a device in their pocket, more computing power than IBM had ever produced at that time, what would they have predicted about the offense-defense balance of cybersecurity?

I don't know why you'd think that compute would be the limiting factor here. Absent AI, there are only a limited number of ways to deploy more compute.

Neither EA nor e/acc is what we need to build the future

The most recent thing that comes to mind is Beff trying to spread the meme that EA is just a bunch of communists.

E/acc seems to do a good job of bringing people together in Twitter Spaces.

Neither EA nor e/acc is what we need to build the future

Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much.

I suspect that the board will look better over time as more information comes out.

Here are some quotes from the Time article in which Sam was named CEO of the Year:

But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person,” says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.”

... Some worried that iterative deployment would accelerate a dangerous AI arms race, and that commercial concerns were clouding OpenAI’s safety priorities. Several people close to the company thought OpenAI was drifting away from its original mission. “We had multiple board conversations about it, and huge numbers of internal conversations,” Altman says. But the decision was made. In 2021, seven staffers who disagreed quit to start a rival lab called Anthropic, led by Dario Amodei, OpenAI’s top safety researcher. 

... For some time—little by little, at different rates—the three independent directors and Sutskever were becoming concerned about Altman’s behavior. Altman had a tendency to play different people off one another in order to get his desired outcome, say two people familiar with the board’s discussions. Both also say Altman tried to ensure information flowed through him. “He has a way of keeping the picture somewhat fragmented,” one says, making it hard to know where others stood.

... Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions.

In other words, it appears that Sam started the fight, not them. Is it really that crazy for the board to try to remove a CEO who had attempted to undermine its oversight of him?

They were definitely outclassed in terms of their political ability, but I don't think they were incompetent. It's more that when you go up against a much more skilled actor, they end up making you look incompetent.

Neither EA nor e/acc is what we need to build the future

Most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.

EA here.

That doesn't seem true as far as I can tell. E/acc doesn't want to expose its beliefs to falsification; that's why it's almost always about attacking the other side and almost never about arguing for things on the object level.

E/acc doesn't care about integrity either. They're very happy to tweet all kinds of weird conspiracy theories.

Anyway, I could be biased here, but that's how I see it.

Our Relationship With Knowledge

Great post. I really appreciated your comparison of the "more is better" attitude toward knowledge with the "more is better" attitude toward food.

Starting the Journey as CEO of the Roots of Progress

You might want to consider posting this as a top-level post as well.

PASTA and Progress: The great irony

Actually, I can imagine a world where physical brains operate by interacting with some unknown realm that provides a kind of computational capability the brain lacks on its own, although as neuroscience advances, there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).

PASTA and Progress: The great irony

I don't identify as a materialist either (I'm still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn't a zombie.

(I should add, this conversation has been useful to me as it's helped me understand why certain things I take for granted may not be obvious to other people).

PASTA and Progress: The great irony

What's your doubt?

Given enough computing power, we should be able to more or less simulate a brain. What was your worry? The ability to parallelise? That even though it may eventually become technically possible, it will always be cost-prohibitive? Or that small errors in the simulation would compound over time?

PASTA and Progress: The great irony

The aspect I was arguing for as almost certain on the inside view is that we would be able to develop AGI eventually, barring catastrophe. I wasn't extending that to "AGI will be here soon".

Regarding "AGI kill us or solve all our problems"; I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI or the AI keeps us alive for some reason (incl. s-risk scenarios) or being disempowered by AI/"going out with a whimper" as per What failure looks like. But I assign almost no weight on the internal view of AGI just not being that good. (What I mean by that, is I exclude the scenarios that are common in sci-fi where we have AGI and we still have humans doing most things and being better or as good, but not scenarios where humans do things b/c we don't trust the AI or b/c we need "fake jobs" for the humans to feel important).
