Epistemic status: low confidence

Crossposted on High Modernism

A foremost goal of the Progress community is to accelerate progress. Part of that involves researching the inputs of progress; another part involves advocating for policies that promote progress. Common policy proposals include:

  • Fixing the housing supply problem
  • Improving and increasing research and development spending
  • Increasing immigration
  • Repealing excessive regulations, particularly in the energy sector

All of these would be very good, and I support them. At the same time, any attempt to increase growth runs up against a number of headwinds:

  • The US and other Western governments appear to be deeply sclerotic, leading to regulatory bloat and perhaps making change difficult
  • Population growth is collapsing in the US, due to both fewer births and less immigration. Under most growth models, people are the key source of new ideas.
  • Good ideas are (likely) getting harder to find. Growth on the frontier may simply get harder as we pick the “low-hanging fruit,” though this is of course debated.

The US has grown at an average rate of 2.7% per year since the Reagan administration. The last 10 years have been more disappointing: less than 2%. What could a successful Progress movement accomplish? Raising the rate to 2.5%? To 4%?
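To make those rates concrete, here is a quick compounding sketch; the constant rates and the 30-year horizon are illustrative assumptions, not forecasts:

```python
# How much larger the economy becomes after 30 years at different
# constant annual growth rates. Purely a compounding exercise.

def growth_multiple(rate: float, years: int) -> float:
    """Total growth multiple after `years` of constant annual `rate`."""
    return (1 + rate) ** years

for rate in (0.02, 0.025, 0.04):
    print(f"{rate:.1%} for 30 years -> economy is {growth_multiple(rate, 30):.2f}x larger")

# Approximate output:
# 2.0% for 30 years -> economy is 1.81x larger
# 2.5% for 30 years -> economy is 2.10x larger
# 4.0% for 30 years -> economy is 3.24x larger
```

Even small changes in the rate compound meaningfully over a generation, though as the rest of this post argues, none of these figures looks world-changing on its own.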

I should emphasize that I admire all of the policy and research currently being done by advocates of progress. But we usually approach Progress from the frame of the Great Stagnation: we used to grow quickly, then something happened around 1971, and now we grow slowly. I wonder whether we should also consider different worldviews about where we stand in relation to the future.

I’m particularly interested in the view that we’re living in the Most Important Century. In this view, we are nearing a breakthrough that could overcome the headwinds of population decline and the ever more difficult search for new ideas: knowledge production via automation. 

Holden Karnofsky calls this AI system PASTA: Process for Automating Scientific and Technological Advancement. If PASTA or something similar were created, we might enter a period of increasing growth that would quickly usher in a radically different future. 

It sounds a bit far-fetched, but there hasn’t been a devastating argument made against it. Science sounds like something that would be hard to automate, but AI isn’t progressing the way we expected; rather than slowly working its way up from low-skilled to high-skilled labor, as was often anticipated, AI seems to be on a crash course with creative professions like writing (GPT systems) and now illustration (DALL-E). Machine learning is all about training by trial and error without precise instruction. And as impressive as current models are, they aren’t even 1% as big as human brains. But that will quickly change as computing power becomes cheaper (more on AI and biological anchors here).
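As a rough back-of-envelope check on that “not even 1%” claim, here is a hedged sketch; both figures are commonly cited approximations, and comparing parameters to synapses is only a loose analogy borrowed from the biological-anchors framing:

```python
# Back-of-envelope comparison of a large language model's parameter count
# to a rough estimate of synapses in the human brain.
# Both numbers are approximate, and the comparison is only an analogy.

gpt3_parameters = 175e9    # ~175 billion parameters (GPT-3)
human_synapses = 100e12    # ~100 trillion synapses (rough common estimate)

ratio = gpt3_parameters / human_synapses
print(f"GPT-3 parameter count is roughly {ratio:.3%} of the synapse estimate")
# -> roughly 0.175%, i.e. well under 1%
```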

Plus, when have friends of progress been averse to sci-fi-sounding futures?

If this seems compelling, Karnofsky’s post on PASTA (and the rest of the Most Important Century series) discusses these scenarios in much more detail. 

So should we just build PASTA and reap the rewards of Progress? No. More likely, we should be extremely worried. There are serious risks from misaligned artificial intelligence, which could pose a threat to human civilization, and there are possibly also risks from humans colonizing the galaxy without sufficient ethical reflection on how to do that responsibly.

So we’re caught in a funny place: a lot of proximate growth goals look good but not world-changing. And the “big bet” may be a suicide mission. I’m not sure what to make of all of this. The implication might simply be to work on AI alignment and policy. At a bare minimum, I think it’s worth being more curious about these discussions.

There’s a big irony here: as pessimistic as EAs are about AI trajectories, they see the possibility of, in Karnofsky’s words, “a stable, galaxy-wide civilization.” Wouldn’t it be silly if we were working on NSF spending when the takeoff began?

14 comments

Yes, I've been thinking about this often.

I do think it's important to work on AI safety. I would like to learn more about it. I have been following the debate on this to at least some extent.

If we can make safe AI, then I think it has enormous potential, possibly even at the PASTA level. There's a paper from Robin Hanson where he models the long-run history of economic growth as a series of three exponential modes (very roughly, “hunting,” “farming,” and “industry”) and speculates that if there is a fourth mode, we are due for it soon—and that it could create growth levels ~2 orders of magnitude greater than we've seen in the Industrial Age. PASTA is certainly a candidate for such a fourth mode.
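To give a feel for what “~2 orders of magnitude greater” would mean, here is a rough sketch converting growth rates into doubling times; the industrial-era rate is illustrative, and the hypothetical fourth-mode rate simply scales it by 100x, following the rough figure above rather than Hanson's exact estimates:

```python
import math

def doubling_time_years(annual_growth_rate: float) -> float:
    """Years for the economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

industrial_rate = 0.03                     # illustrative industrial-era growth rate
fourth_mode_rate = industrial_rate * 100   # "~2 orders of magnitude" faster (assumption)

print(f"Industrial mode: doubles every ~{doubling_time_years(industrial_rate):.0f} years")
print(f"Hypothetical fourth mode: doubles every ~{doubling_time_years(fourth_mode_rate) * 12:.0f} months")

# Approximate output:
# Industrial mode: doubles every ~23 years
# Hypothetical fourth mode: doubles every ~6 months
```

That jump, from doubling over decades to doubling within months, is roughly the size of the jumps between Hanson's earlier modes.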

But I think that progress studies is still relevant, even in such a world—perhaps far more relevant. If we retain control over the future, then we will need to make choices that shape the future, and progress studies should guide us.

And of course, there are so many unknown unknowns here that we can't be sure that AGI will happen, or on what timescale. So it's also important to keep working other angles (nanotech, longevity, etc.). For that reason, although I understand the sentiment, I don't think it would literally be “silly if we were working on NSF spending when the takeoff began.” We should be working on many things at once.

I'm curious, what's your main doubt about AGI happening eventually (excluding existential risks or scenarios where we end up back in the Stone Age)? The existence of humans, created by dumb evolution no less, seems to constitute strong evidence of physical possibility. And our ability to produce computer chips with astonishingly tiny components seems to suggest that we can actually do the physical manipulations required. So I think it's one of those things that sounds more speculative than it actually is.

I mean, I guess it's true that there is some doubt about AGI happening, but when you really get down to it, you can doubt anything. So I'd be curious to have a better idea of what you mean by "some doubt," maybe even as a rough percent chance? Within my model of the world, I assign a very low chance to AGI not happening (barring catastrophic risks as stated above), but I assign a higher, though still low, chance to my model being wrong.

I don't think it's fair to act like Jason is doubting something so knockdown clear. Yes, to you and me AGI seems obviously possible, and it even seems likely within this century, but Jason said he doesn't know much about the AI stuff. And his default view is agnosticism, not deference to the LW community. Don't forget that not everyone has spent the past decade reading about AGI! ;)

I have read enough (e.g., Holden Karnofsky's essays) to understand the case for it. It is a compelling case. What I'm arguing against is a line of thinking like: “AGI will be here soon and it will either kill us or solve all our problems, so there's no point in working on curing cancer, longevity, nanotech, fusion, or progress studies.” There are just too many unknown unknowns.

On top of which I would add that machine intelligence, however it evolves, is something very different from human intelligence, just as a washing machine is different from a housekeeper and a submarine is different from a whale. Machines “think” in the way that a submarine “swims.” So there are limits on how much we can extrapolate from human intelligence.

Re the first point, I agree. I would tentatively suggest doing something like OpenPhil's worldview diversification, where research, labor, and capital are divided among a few distinct future scenarios and each is optimized independently. My point in the piece is that I think the current program is a bit under-diversified.

What “current program” are you referring to exactly? (The progress studies community? The world? Or what?)

The aspect I was arguing for as almost certain on the inside view is that we would be able to develop AGI eventually, barring catastrophe. I wasn't extending that to "AGI will be here soon."

Regarding "AGI kill us or solve all our problems"; I think there are some possible scenarios where we end up with a totalitarian government or an oligarchy controlling AI or the AI keeps us alive for some reason (incl. s-risk scenarios) or being disempowered by AI/"going out with a whimper" as per What failure looks like. But I assign almost no weight on the internal view of AGI just not being that good. (What I mean by that, is I exclude the scenarios that are common in sci-fi where we have AGI and we still have humans doing most things and being better or as good, but not scenarios where humans do things b/c we don't trust the AI or b/c we need "fake jobs" for the humans to feel important).

Re “we would be able to develop AGI eventually” as “almost certain”: At least up until a year ago I would have said no, definitely not certain, because a computer is very different from a brain and we don't know yet what it can do. However, as AI advances, I put more probability on it.

What's your doubt?

Given enough computing power, we should be able to more or less simulate a brain. What is or was your worry? Ability to parallelise? Maybe that even though it may eventually become technically possible, it'll always be cost-prohibitive? Or maybe that small errors in the simulation would magnify over time?

Well, I'm not a materialist, so it's not obvious to me that we can successfully simulate a brain, in the ways that matter, on purely material hardware. We just really don't understand consciousness or how it arises at all. That to my mind is a huge unknown.

I don't identify as a materialist either (I'm still figuring out my views here), but the question of qualia seems orthogonal to the question of capabilities. A philosophical zombie has the same capability to act in the world as someone who isn't a zombie.

(I should add, this conversation has been useful to me as it's helped me understand why certain things I take for granted may not be obvious to other people).

Well, I'm also not sure if p-zombies can exist!

(Although if an AI passed the Turing Test I would be more likely to think it is a p-zombie than to think that it is conscious.)

Actually, I can imagine a world where physical brains operated by interacting with some unknown realm that provided some kind of computational capability that the brain lacked itself, although as neuroscience advances, there seems to be less and less scope for anything like this (not that I know very much about neuroscience at all).

Thanks for posting this! I would lean towards saying that it would be more tractable for Progress Studies to make progress on these issues than it might appear at first glance. One major advantage that progress studies has is that it is a big-tent movement. Lots of people are affected by the unaffordability of housing and would love to see it cheaper, but very few people care enough about housing policy to show up to meetings about it every month. The topic just isn't that interesting to most people, myself included, and the conversations would probably get old fast. In contrast, Progress Studies promises to bundle enough ideas together that it has real growth potential.