It isn't clear that the offense-defense balance directly affects the number of deaths in a conflict in the way that you claim. For example, machine-gun nests benefitted defenders significantly, but could quite easily have resulted in more deaths in warfare, due to the use of tactics that hadn't yet adapted to them.
If you had told people in the 1970s that in 2020 terrorist groups and lone psychopaths would be able to access, from their pockets, more computing power than IBM had ever produced up to that point, what would they have predicted about the offense-defense balance?
The most recent thing that comes to mind is Beff trying to spread the meme that EA is just a bunch of communists.
E/acc seems to do a good job of bringing people together in Twitter spaces.
Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much.
I suspect that the board will look better over time as more information comes out.
Here are some quotes from the Time article where Sam was named CEO of the Year:
"But four people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that…"
Most of the principles espoused by EA (scientific mindset, openness to falsifying evidence, integrity, and teamwork) are shared by e/acc.
EA here.
Doesn't seem true as far as I can tell. E/acc doesn't want to expose its beliefs to falsification; that's why it's almost always about attacking the other side and almost never about arguing for things on the object level.
E/acc doesn't care about integrity either. They're very happy to Tweet all kinds of weird conspiracy theories.
Anyway, I could be biased here, but that's how I see it.
Great post. I really appreciated your comparison of the "more is better attitude" regarding knowledge with the "more is better attitude" regarding food.
You might want to consider posting this as a top-level post as well.
Actually, I can imagine a world where physical brains operate by interacting with some unknown realm that provides a kind of computational capability the brain lacks itself, although as neuroscience advances there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).
I don't identify as a materialist either (I'm still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn't a zombie.
(I should add, this conversation has been useful to me as it's helped me understand why certain things I take for granted may not be obvious to other people).
What's your doubt?
Given enough computing power, we should be able to more or less simulate a brain. What is (or was) your worry? The ability to parallelise? That even though it may eventually become technically possible, it'll always be cost-prohibitive? Or that small errors in the simulation would magnify over time?
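To illustrate that last worry, here's a minimal sketch in Python, using the logistic map as a stand-in for any chaotic dynamics (the brain analogy is purely an assumption for illustration):

```python
# Two runs of a chaotic system (the logistic map, standing in for any
# dynamics sensitive to initial conditions) from almost-identical states.
r = 3.9                    # parameter in the chaotic regime
x, y = 0.5, 0.5 + 1e-12    # initial states differing by one part in a trillion

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: divergence = {abs(x - y):.3e}")

# The gap grows roughly exponentially until the two trajectories are
# unrelated, which is the sense in which small errors "magnify over time".
```

Whether brain simulation is actually this sensitive, or whether error correction keeps it tractable, is exactly the open question.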
The aspect I was arguing for as almost certain on the inside view is that we would be able to develop AGI eventually, barring catastrophe. I wasn't extending that to "AGI will be here soon".
Regarding "AGI kill us or solve all our problems"; I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI or the AI keeps us alive for some reason (incl. s-risk scenarios) or being disempowered by AI/"going out with a whimper" as per What failure looks like. But I assign almost no weight on the internal vi... (read more)
I'm curious, what's your main doubt about AGI happening eventually (excluding existential risks or scenarios where we end up back in the stone age)? The existence of humans, created by dumb evolution no less, seems to constitute strong evidence of physical possibility. And our ability to produce computer chips with astonishingly tiny components suggests that we can actually do the physical manipulations required. So I think it's one of those things that sounds more speculative than it actually is.
I mean, I guess it's true that there is some d…
Thanks for posting this! I would lean towards saying that it would be more tractable for Progress Studies to make progress on these issues than it might appear at first glance. One major advantage that Progress Studies has is that it is a big-tent movement. Lots of people are affected by the unaffordability of housing and would love to see it cheaper, but very few people care enough about housing policy to show up to meetings about it every month. The topic just isn't that interesting to most people, myself included, and the conversations would probably get old fast. In contrast, Progress Studies promises to bundle enough ideas together that it has real growth potential.
One thing to keep in mind is the potential for technologies to be hacked. I think widespread self-driving cars would be amazingly convenient, but also terrifying as companies allow them to be updated over the air. Even though the chance of a hacking attack at any given time is low, given a long enough time span and enough companies it's practically inevitable. When it comes to these kinds of wide-scale risks, a precautionary approach seems viable; when it comes to smaller and more manageable risks, a more proactionary approach makes sense.
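To put a rough number on the "practically inevitable" point, here's a back-of-the-envelope sketch (the per-year probability and company count are made-up, illustrative figures, not estimates):

```python
# Back-of-the-envelope: a small per-company, per-year chance of a serious
# hack compounds across many companies and many years.
p_per_year = 0.01   # assumed 1% chance per company per year (illustrative)
companies = 10      # assumed number of companies shipping OTA updates
years = 20

p_no_incident = (1 - p_per_year) ** (companies * years)
print(f"P(at least one major incident) = {1 - p_no_incident:.1%}")
# With these made-up numbers: roughly 87% over two decades.
```

The exact numbers don't matter much; the compounding does.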
That things that are good are desirable would seem like a tautology.
But my deeper critique is that whether a motto is a good choice or not depends on the context. And while in the past it may have made sense to abstract out progress as good, we’re now at that point where operating within that abstraction can lead us horribly astray.
I enjoyed this interview. I found it particularly interesting to hear how you were originally skeptical of the stagnation view and only came around to it later.
Nuclear non-proliferation has slowed the distribution of nukes; I acknowledge that this is slowing distribution rather than development.
There are conventions against the use of or development of biological weapons. These don't appear to have been completely successful, but they've had some effect.
There has been a successful effort to prevent genetic enhancement - this may be net-positive or net-negative - but it shows the possibility of preventing the development of a tech, even in China, which was assumed to be the Wild West.
But going further, progress studies…
One thing to keep in mind regarding measuring influence by numbers: because EA started earlier, many EAs will be further into executing their plans. As an example, someone who is a student at a top university in 2020 might be a senior manager by 2030.
This was triggered by news today in my home state of California, where a powerful legislator wants to spend $10 billion of our (temporary bumper) surplus subsidizing housing.
For some reason, the media really doesn't want to spread the message "we need to build more housing". One theory is that many older journalists own property and don't want more construction in their neighborhoods. This doesn't seem like a very good explanation, as then we might predict that younger journalists who don't own property would push to build more.
A second theory is that…
"If you want to build a ship, don't drum up people to collect wood and don't assign them tasks and work, but rather teach them to long for the endless immensity of the sea" - Antoine de Saint Exupéry
For example, what could be done to make AlexNet happen 10 years earlier?
I know it might be a heretical question on this forum, but do we really need to accelerate AI? Isn't there some point at which we can say "fast enough"? Like, if we could press a button and make AGI appear today, would it be wise to press that button? Are we truly ready for the consequences of what would arguably be the most important moment in our entire history? Aren't there enough other things in society that we could fix instead?
I expect that 2 is true as well, and so it made sense to invent the bomb before another, less responsible country did. But if we could have waved a wand and prevented the invention of nukes, then I think it would have been worthwhile, even if it cost us nuclear energy or slowed global progress.
I mean, a lot of people oppose progress for pretty silly, poorly-thought-out reasons, but as far as reasons go, "we invented/almost invented something that could potentially have killed everyone on earth" seems like not a bad reason to slow things down for a bit and reflect.
Silicon Valley, with its focus on disruption, was originally highly suspicious of the business establishment, although this seems to have softened somewhat as it has formed an establishment of its own.
As an example, look at the 1984 Macintosh Commercial.
"The modern version is much more comfortable with technocracy" - I wasn't aware of that. I would love to see a source on this.
"It was likely inevitable anyway" - I'd suggest separating the question of whether a certain technology should have been developed from the question of whether avoiding it was possible. For example, let's suppose someone is dying of cancer and we have no way of saving them.
Do we want to save them? Yes
Can we save them? No
I would be very disappointed if people ended up concluding from our inability to save them that we didn't actually want to save them anyway.
Similarly for nuclear weapons, the table may very well be:
Do we want to avoid them? Yes
Can we avoid them? No
Which is what I would…
I suppose TED talks are the closest thing that exists to this, though the popularity of TED seems to have peaked a while ago.
"The development of the bomb may have been a pretty good period, since it led to nuclear energy and other innovations" - I agree that we're probably ahead at this point, but, I don't know, it seems like a pretty risky bet that this will remain net-positive over the long term. Sure, it's nice that nuclear power is an option, even if we don't make much use of it, and that we have isotopes for medical use, but does that really feel worth having a nuclear apocalypse hanging over our heads?
Einstein said: "I do not know with what weapons World War III will be fought, but World War IV will be fought with sticks and stones."
A few thoughts:
Fascinating article. I'm surprised that I had never heard of the Bonfire of the Vanities and how it disrupted the Renaissance. I wonder how history would have turned out if it hadn't been disrupted.
I also found it interesting how those short disruptions were sufficient to end those societies' golden ages, particularly since I would be tempted to argue that our own society has recently been suffering through such a disruption.
For the flip side of the coin, I would like to nominate the invention of the nuclear bomb as one of the most tragic moments in history.
I think there's likely to be a bit more tension between Progress Studies and the EA of today than there was with the EA of the past.
The EA of the past was much more focused on global development (progress = good), whilst EA is currently undergoing a hard pivot towards longtermism, most notably biorisk and AI risk (progress = bad). Actually, the way I'd frame it is more about the importance of ensuring differential progress rather than progress in general. And I don't know how optimistic I am about Progress Studies heading in that direction, because thinking about progress itself is…
I've done something similar where I've asked for backstop funding for a project in case I wasn't able to get it funded elsewhere.
"Progress is real, desirable, and possible" is an inspiring slogan, but I would suggest that it's actually mistaken. What we want is differential progress where we accelerate those technologies most likely to be beneficial and slow those technologies most likely to be harmful.
"The trade-off is that we can't solve old problems without creating new problems. In fact, new problems are one way to measure progress."
I think that there is some value in this frame, but I guess I see it as limited to contexts where we're generally replacing bad problems with less bad problems.
I guess it would seem a bit blasé in a context where we take a problem that is only kind of bad and replace it with something catastrophic.
So my tendency would be to be much more cautious about the potential to create new problems.
I'll be quite curious to see whether this conference is enough to revive this forum.