Chris Leong


PASTA and Progress: The great irony

Actually, I can imagine a world where physical brains operate by interacting with some unknown realm that provides a kind of computational capability the brain lacks on its own, although as neuroscience advances, there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).

PASTA and Progress: The great irony

I don't identify as a materialist either (I'm still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn't a zombie.

(I should add, this conversation has been useful to me as it's helped me understand why certain things I take for granted may not be obvious to other people).

PASTA and Progress: The great irony

What's your doubt?

Given enough computing power, we should be able to more or less simulate a brain. What is or was your worry? Ability to parallelise? Maybe that even though it may eventually become technically possible, it'll always be cost-prohibitive? Or maybe that small errors in the simulation would magnify over time?

PASTA and Progress: The great irony

The aspect I was arguing for as almost certain on the inside view is that we would be able to develop AGI eventually, barring catastrophe. I wasn't extending that to "AGI will be here soon".

Regarding "AGI kills us or solves all our problems": I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI, or where the AI keeps us alive for some reason (including s-risk scenarios), or where we are disempowered by AI and "go out with a whimper" as per What failure looks like. But I assign almost no weight, on the inside view, to AGI just not being that good. (What I mean by that is I exclude the scenarios common in sci-fi where we have AGI yet humans still do most things as well as or better than it, but not scenarios where humans do things because we don't trust the AI or because we need "fake jobs" for humans to feel important.)

PASTA and Progress: The great irony

I'm curious, what's your main doubt about AGI happening eventually (excluding existential risks or scenarios where we end up back in the stone age)? The existence of humans, created by dumb evolution no less, seems to constitute strong evidence of physical possibility. And our ability to produce computer chips with astonishingly tiny components seems to suggest that we can actually do the physical manipulations required. So I think it's one of those things that sounds more speculative than it actually is.

I mean, I guess it's true that there is some doubt about AGI happening, but when you really get down to it, you can doubt anything. So I guess I'd be curious to have a better idea of what you mean by some doubt - maybe even a rough percent chance? I have a very low percent chance of AGI not happening (barring catastrophic risks as stated above) from within my model of the world, but I have a higher, but still low chance of my model being wrong.

PASTA and Progress: The great irony

Thanks for posting this! I would lean towards saying that it would be more tractable for Progress Studies to make progress on these issues than it might appear at first glance. One major advantage Progress Studies has is that it is a big-tent movement. Lots of people are affected by the unaffordability of housing and would love to see it cheaper, but very few people care enough about housing policy to show up to meetings about it every month. The topic just isn't that interesting to most people, myself included, and the conversations would probably get old fast. In contrast, Progress Studies promises to bundle enough ideas together that it has real growth potential.

What's the Right Policy Default for AI?

One thing to keep in mind is the potential for technologies to be hacked. I think widespread self-driving cars would be amazingly convenient, but also terrifying as companies allow them to be updated over the air. Even though the chance of a hacking attack at any particular instant is low, given a long enough time span and enough companies it's practically inevitable. When it comes to these kinds of wide-scale risks, a precautionary approach seems appropriate; when it comes to smaller, more manageable risks, a more proactionary approach makes sense.

Where is “Progress Studies” Going?

"Things that are good are desirable" would seem like a tautology.

But my deeper critique is that whether a motto is a good choice or not depends on the context. And while in the past it may have made sense to abstract out progress as good, we’re now at that point where operating within that abstraction can lead us horribly astray.

Interview with me on Hear this Idea with Fin Moorhouse and Luca Righetti

I enjoyed this interview. I found it particularly interesting to hear how you were originally skeptical of the stagnation view and only came around to it later.

Where is “Progress Studies” Going?

Nuclear non-proliferation has slowed the distribution of nukes; I acknowledge that this is slowing distribution rather than development.

There are conventions against the use of or development of biological weapons. These don't appear to have been completely successful, but they've had some effect.

There has been a successful effort to prevent human genetic enhancement - this may be net-positive or net-negative - but it shows the possibility of preventing the development of a tech, even in China, which was assumed to be the Wild West.

But going further, progress studies wouldn't exist if we didn't think we could accelerate technologies. And as a matter of logic, if we have the option to accelerate something, we also have the option to not accelerate it; otherwise it was never an option. So even if we can't slow a harmful technology relative to a baseline, we can at least not accelerate it.