My focus will be on the actual arguments in the section on optimization pressure, since that seems to be the true objection here - the previous sections read as rhetoric and background, mostly accepting the theoretical basis for the discussion.
I take it this essay presumes that the pure version of the argument is true - if you were so foolish as to tell a sufficiently capable AGI 'calculate as many digits of Pi as possible' with no mitigations in place, and it has the option to take over the world to do the calculation faster, it's going to do that.
However I inter...
I think it's an important crux in its own right which level of such safety is necessary or sufficient to expect good outcomes. What is the default kind of situation and use case? What can we reasonably hope to prevent from happening at all? Do our 'trained professionals' actually know what they have to do, if they do have solutions available, especially without being able to cheaply make mistakes and iterate? Reality is often so much stupider than we expect.
Saying 'it is possible to use a superintelligent system safely' would, if true, be highly insufficient, unless...