How do you (and, separately, the Progress Studies community broadly) relate to hard takeoff risk from AI?

I can only speak for myself.
I think that AI safety is a real issue. Many (most?) new technologies create
serious safety issues, and it's important to take them seriously so that we can
mitigate risk. I think this is mostly a job for the technologists and founders
who are actually developing and deploying the technology.
I think that “hard takeoff” scenarios are (almost by definition?) extremely
difficult to reason about, and thus necessarily involve a large degree of
speculation. I can't prove that a hard takeoff won't happen, but any such scenario seems
well outside our ability to predict or control.
A more likely AI global catastrophe scenario, to my mind, is: Over the coming
years or decades, we gradually deploy AI more and more as the control system for
every major part of the economy. AI traders dominate financial markets; AI
control systems run factories and power plants; all our vehicles are autonomous,
for both passengers and cargo; etc. And then at some point we hit an out-of-distribution
(OOD) edge case that triggers some kind of crash, which ripples through the entire economy
and causes trillions of dollars' worth of damage. A complex system failure that
makes the Great Depression look like a picnic.
In any case, I'm glad some smart people are thinking about AI safety up front
and working on it now.