Jonas Kgomo

Draft for comment: Towards a philosophy of safety

I am curious how we can use the theory of AI safety (although it is still in its formative stages) to address both safety and progress. Progress safety seems very important; it could be approached at the policy level or through exploratory engineering. Perhaps there is an overlap between AI safety and Progress safety. Overall, a very intriguing idea to think about, and I'm looking forward to the full version.

It seems related to moral uncertainty: when deciding whether to pursue a technology that carries some possibility of existential risk but has positive externalities that may outweigh the hazards, do you choose to pursue it or not?
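
One toy way to see why this trade-off is not a straightforward benefit-versus-cost comparison: even a small probability of existential catastrophe can dominate large positive externalities. A minimal sketch in Python, where the function name and all numbers are purely hypothetical and chosen only for illustration:

```python
# Toy illustration (not from the original comment): expected value of pursuing
# a technology under uncertainty about the chance of existential catastrophe.
# All names and numbers below are hypothetical.

def expected_value(p_catastrophe: float,
                   value_if_safe: float,
                   loss_if_catastrophe: float) -> float:
    """Expected value of pursuing the technology."""
    return (1 - p_catastrophe) * value_if_safe - p_catastrophe * loss_if_catastrophe

# Hypothetical inputs: a small chance of catastrophe, sizable positive
# externalities, but an enormous loss if things go wrong.
p = 0.001            # assumed probability of existential catastrophe
benefit = 1_000      # assumed value of the positive externalities
loss = 10_000_000    # assumed magnitude of the catastrophic loss

ev_pursue = expected_value(p, benefit, loss)
ev_abstain = 0.0     # baseline: do not pursue the technology

print(f"EV(pursue)  = {ev_pursue:.1f}")
print(f"EV(abstain) = {ev_abstain:.1f}")
# With these illustrative numbers, the expected value of pursuing is negative
# even though the benefit is the likely outcome: the size of the downside
# dominates the calculation.
```

Moral uncertainty adds a further layer on top of this: the numbers themselves (how bad the catastrophe is, how much the externalities are worth) depend on which moral view is correct, so the decision has to be robust across views rather than optimal under just one.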