All of leo's Comments + Replies

How can we classify negative effects of new technologies?
Answer by leo · Mar 05, 2023

I wrote about Class 1 / Class 2 in the context of blockchain for my blog today and wanted to share my updated thoughts after spending a few days thinking.

I think that, fundamentally, Class 2 problems are just a rephrasing of tragedy-of-the-commons issues. The rephrasing is still useful, because it gives us a new perspective from which to approach the issue.

In the piece, I suggest that we can predict Class 2 problems by thinking about the specific features of the technology (e.g. blockchain) that motivate entrepreneurs to solve the Class 1 problems, and thinking about how those features ... (read more)

Tell Good Stories

Excellent description of how stories play a critical role. I'm interested in whether the same sorts of stories could be updated and played again, or whether it has simply become harder to share these kinds of stories. In the UK, in 1951, there was the Festival of Britain, which was similar to other events of the time in showing how the future could be great. It was held at the newly built Southbank Centre. Such events require substantial public-sector funding and, particularly if held frequently, bipartisan commitment. It seems like this is a prerequisite for national... (read more)

How can we classify negative effects of new technologies?
Answer by leo · Oct 02, 2022

Early-adopter influence is one, at least in some cases: I think especially when the tech plays a part in providing infrastructure, and perhaps elsewhere too.

Kelly talks about crypto, and this is my motivating example here:

Today, though decentralised in name, most of the biggest organisations in crypto are controlled by tiny groups of people, typically single digits (for the orgs where voting takes place on the blockchain, you can verify this yourself).

Such concentration isn't really a problem when a space is small, like crypto is today (relatively: <1 million active users by f... (read more)
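A minimal sketch of the kind of check the quoted passage alludes to: measuring how concentrated on-chain governance voting is. The addresses and tallies below are hypothetical; in practice you would pull them from a block explorer or the governance contract's vote events.

```python
# Hypothetical vote tallies for a single governance proposal:
# address -> voting power. Real data would come from on-chain records.
votes = {
    "0xaaa": 4_000_000,
    "0xbbb": 3_500_000,
    "0xccc": 1_500_000,
    "0xddd": 600_000,
    "0xeee": 250_000,
    "0xfff": 150_000,
}

def top_n_share(tallies, n):
    """Fraction of total voting power held by the n largest voters."""
    powers = sorted(tallies.values(), reverse=True)
    return sum(powers[:n]) / sum(powers)

share = top_n_share(votes, 3)
print(f"Top 3 voters control {share:.0%} of voting power")  # -> 90%
```

With tallies like these, a handful of addresses deciding every vote is exactly the single-digit concentration the comment describes, and it is verifiable by anyone with access to the chain's public records.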

Max Olson · 2y: That's a good point. I wouldn't say that "inequality" alone would be a risk category, but more specifically inequality that leads to future brittleness or fragility, as in your example. Basically, in this case it's path dependent, and certain starting conditions could lead to a worse outcome. This obviously could be the case for AI as well.