Here's a draft chapter from my forthcoming book on artificial intelligence governance. It specifically addresses Nick Bostrom's calls for global surveillance of AI/robotics research and development, and it considers what the history of nuclear and chemical weapons control teaches us about AI arms control efforts. It's 17 pages long, and I'd welcome input. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4174399
Some key takeaways:
- Precautionary restraints are most justifiable when the harms are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb. But some critics and policymakers blow things out of proportion by misdefining existential risk, or they fail to appreciate how knowledge and resource constraints severely limit our ability to predict the course of technological development.
- Often the most important solution to technological risk is more technological innovation to overcome those problems. The greatest existential risk of all would be to block further technological innovation and scientific progress.
- Proposals to impose global control of AI through some sort of international regulatory authority are both unwise and extremely unlikely to work.
- As with nuclear and chemical weapons in the past, treaties, accords, sanctions, and other multilateral agreements can help address some threats of malicious uses of AI or robotics. Bilateral or unilateral actions may be necessary in certain limited instances when national security threats are clearer and more immediate. But trade-offs are inevitable, and addressing one type of existential risk can sometimes give rise to another, including war.
- Calls for global bans are largely futile because many nations will never agree to them. No major global power is going to preemptively tie its hands by agreeing not to develop offensive AI-oriented military capabilities.
- Soft law will continue to play a role. Many different non-governmental international bodies and multinational actors can serve as coordinators of national policies and conveners of ongoing deliberation about various AI risks and concerns.