There's a long-ish exploration of this in Grand Futures ch. 1 (Details box 7), focusing on long-term projects in general. I'm eliding some footnotes and not linking the citations, for writing speed reasons:
Ongoing projects: numerous examples such as cities (Jericho, 9600 BCE and onwards), mathematics and science, land reclamation, irrigation networks, canals, roads, cultivated landscapes, Japanese shrines rebuilt every few decades, etc. The Gunditjmara eel traps at Budj Bim have been maintained and modified for at least 6,700 years [1531]. Waqfs, charitab…
Merely pausing doesn't help if we aren't doing anything with that time.
True, I was insufficiently careful with my phrasing.
Thanks for the great article :-)
I am commenting as someone who has spent a lot of time thinking about AI alignment and is convinced there is a medium probability (~65%) of doom. I hope this is not intrusive on this forum!
I hadn't considered the crux to be epistemic, which is an interesting and important point.
I would be interested in an attempt to quantify how slowly humanity should be moving on this: Is the right pace comparable to that of genetic engineering, or of nuclear weapon proliferation? Should we pause until our interp…
Is this rationalists anthropomorphizing AI to behave and think like they do, perhaps?
As someone who thinks AI doom is fairly likely (~65%), I reject this as psychologizing.
I think there is an argument for TAI x-risk that takes progress seriously: transformative AI does not need to be omnipotent or all-knowing; it simply needs to be more capable than anything humanity can muster against it.
Consider the United States versus the entire world population of 1200: roughly the same size. But if you pitted those two actors against each other in a conflict…
—Evgeny Sedukhin - "Symphony of the sixth blast furnace" (1979)