All of niplav's Comments + Replies

niplav's Shortform

—Evgeny Sedukhin, "Symphony of the Sixth Blast Furnace" (1979)

Max Görlitz's Shortform

There's a long-ish exploration of this in Grand Futures ch. 1 (Details box 7), focusing on long-term projects in general. I'm eliding some footnotes and not linking the citations, for writing speed reasons:

Ongoing projects: Numerous examples such as cities (Jericho 9600 BCE and onwards), mathematics and science, land reclamation, irrigation networks, canals, roads, cultivated landscapes, Japanese shrines rebuilt every few decades, etc. The Gunditjmara eel traps at Budj Bim have been maintained and modified for at least 6,700 years [1531]. Waqfs, charitab

... (read more)
Wizards and prophets of AI [draft for comment]

Merely pausing doesn't help if we aren't doing anything with that time.

True, I was insufficiently careful with my phrasing.

Wizards and prophets of AI [draft for comment]

Thanks for the great article :-)

I am commenting as someone who has spent a lot of time thinking about AI alignment and who considers themselves convinced that there is a medium probability (~65%) of doom. I hope this is not intrusive on this forum!

I hadn't considered the crux to be epistemic, which is an interesting and important point.

I would be interested in an attempt to quantify how slowly humanity should be moving with this: Is the best level of caution comparable to the one for genetic engineering, or for nuclear weapon proliferation? Should we pause until our interp... (read more)

jasoncrawford (2y): Thanks. Rather than asking how fast or slow we should move, I think it's more useful to ask what preventative measures we can take, and then estimate which ones are worth the cost/delay. Merely pausing doesn't help if we aren't doing anything with that time. On the other hand, it could be worth a long pause and/or a high cost if there is some preventive measure we can take that would add significant safety. I don't know offhand what would raise my p(doom), except for obvious things like smaller-scale misbehavior (financial fraud, a cyberattack) or dramatic technological acceleration from AI (genetic engineering, nanotech).
Wizards and prophets of AI [draft for comment]

Is this rationalists anthropomorphizing AI to behave/think like they think, perhaps?

As someone who thinks AI doom is fairly likely (~65%), I reject this as psychologizing.

I think there is an argument for TAI x-risk which takes progress seriously. A transformative AI does not need to be omnipotent or all-knowing: it simply needs to be more capable than whatever forces humanity can muster against it.

Consider the United States today versus the entire world population in 1200 CE: roughly the same number of people. But if you pitted those two actors against each other in a conflict... (read more)