Riffing off of Jason's recent post on progress safety...

I was trying to classify the unintended negative consequences of new technologies, with the idea that this would help predict what those consequences might be for current and future innovations. With a better understanding of potential downsides, we might have a better chance of solving or addressing them in advance.

Kevin Kelly wrote a good post putting the effects into 2 classes: "Class 1 problems are due to it not working perfectly. Class 2 problems are due to it working perfectly."

What are the categories one level down? Here are the ones I could come up with, but I'd love to hear other thoughts:

  • Large scaling effects -- no issue at small-scale usage, but harmful at large scale
  • Harmful waste product
  • Over-reliance on use (makes users fragile) -- probably true of most tech
  • Unsustainable inputs
  • Maladaptive health effects -- adds or removes something our bodies aren’t used to and can’t adapt to within one generation
  • Increases mimetic rivalry
  • Destroys existing stable social systems

4 Answers

The existence of most of the ones you listed sounds questionable.  

How about economic risk exposure (for a given person/city/state)? I think there is already a ton of research on this.

E.g. funding some new nuclear power research could provide a 10,000x ROI but carry a 0.0000X% danger of destroying the city/area around the research facility.
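For what it's worth, here is a minimal sketch (in Python) of how that kind of tradeoff can be weighed as an expected value. The stake size, loss figure, and disaster probabilities below are purely hypothetical placeholders, not estimates; only the 10,000x ROI comes from the example above.

```python
# Minimal expected-value sketch for a high-upside, low-probability-catastrophe bet.
# All numbers except the 10,000x ROI are hypothetical placeholders.

def expected_value(p_disaster: float, gain: float, loss: float) -> float:
    # Upside if nothing goes wrong, weighed against the small-probability catastrophe.
    return (1 - p_disaster) * gain - p_disaster * loss

stake = 1_000_000              # hypothetical research funding, in dollars
gain = 10_000 * stake          # the 10,000x ROI case from the comment above
loss = 500_000_000_000         # hypothetical cost of losing the surrounding city/area

for p in (1e-7, 1e-6, 1e-5):
    print(f"p(disaster) = {p:.0e}  ->  EV = ${expected_value(p, gain, loss):,.0f}")
```

The point of the sketch is only that the comparison is very sensitive to the assumed probability and to how the catastrophic loss is valued, which is exactly where the research the parent comment mentions would come in.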

I wrote about Class 1 / Class 2 in the context of blockchain for my blog today and wanted to share my updated thoughts after spending a few days thinking.

I think fundamentally, Class 2 problems are just a rephrasing of tragedy-of-the-commons issues. The rephrasing is useful because it gives us a new perspective from which to approach an issue.

In the piece, I suggest that we can predict Class 2 problems by thinking about the specific features of the technology, e.g. blockchain, which motivate entrepreneurs to solve the Class 1 problems, and then thinking about how those features could be harmful when overdone (classic market-failure ideas of oversupply).

Rather than coming up with a checklist of things to look out for, which we might never complete, I think using the lens of 'what persuades entrepreneurs to solve the Class 1 problems, and how could this be bad?' gives a useful way to approach Class 2 safety topics. It also lets us make the argument that 'these Class 2 problems are only here because the technology was so good that we fixed all the Class 1 problems, so let's face them head on, rather than banning the technology (or similar)'.

My blog post here

Early-adopter influence is another one, at least in some cases; I think it applies when the tech plays a part in providing infrastructure, and perhaps elsewhere too.

Kelly talks about crypto, and this is my motivating example here:

Today, though decentralised in name, most of the biggest organisations in crypto are controlled by tiny groups of people, typically single digits (for the orgs where voting takes place on the blockchain, you can verify this yourself).
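As a rough illustration of how one might check this, here is a toy Python sketch that counts how many addresses are needed to reach a voting majority. The addresses and vote counts are made up; a real check would export voting power per address from the relevant governance contract or a block explorer.

```python
# Toy check of voting-power concentration. The snapshot below is made up; in
# practice you would pull (address -> voting power) for the org you care about
# from its governance contract or a block explorer.

def addresses_for_majority(voting_power: dict[str, float]) -> int:
    # Count the largest holders needed to exceed half of all voting power.
    total = sum(voting_power.values())
    running, count = 0.0, 0
    for power in sorted(voting_power.values(), reverse=True):
        running += power
        count += 1
        if running > total / 2:
            break
    return count

snapshot = {  # hypothetical governance-token snapshot
    "0xaaa...": 4_000_000,
    "0xbbb...": 2_500_000,
    "0xccc...": 1_200_000,
    "0xddd...": 300_000,
    "0xeee...": 150_000,
    "0xfff...": 50_000,
}

print(addresses_for_majority(snapshot))  # prints 2 for this made-up snapshot
```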

Such concentration isn't really a problem when a space is small, as crypto is today (relatively speaking: well under 1 million active users, etc.). But it becomes significant when the tech is widely adopted. For example, Ethereum, the most important blockchain right now, is de facto (one could argue de jure) controlled by Vitalik Buterin. If trillions of dollars of industries are moved onto Ethereum (like, perhaps, the $14tr securities market), then that becomes problematic (especially if people less socially minded than Vitalik become influential!).

New infrastructure technology creates new elites from the people who were there first; I suppose trains are a historic example. That concentration of power only comes when blockchain works really well, but it can still be problematic.

I suspect the problem comes when these new elites attempt to reframe society. This necessarily causes instability and can block better improvements, even if the reframing is an improvement on the status quo.

That's a good point.

I wouldn't say that "inequality" alone is a risk category; more specifically, it's inequality that leads to future brittleness or fragility, as in your example.

Basically, in this case it's path-dependent, and certain starting conditions could lead to a worse outcome. This could obviously be the case for AI as well.

As I was writing that post, I was thinking in the back of my mind about this distinction:

  • Operational safety: safety from things that are already happening, where we can learn from experience, iterate on solutions, and improve safety metrics over time
  • Development safety: safety from new technology that hasn't been developed yet, where we try to mitigate the harms ahead of time, by theoretical models of risk/harm, or by early small-scale testing ahead of deployment, etc.