Donald Hobson

Comments

Don’t be scared by the AI fearmongering.

>Throughout history, fearmongering has been used to justify a lot of extreme measures.

And throughout history, people have dismissed real risks and been caught with their pants down. Pandemic-prevention measures that would have looked pretty extreme in 2018 or February 2020 make total sense from our current point of view.

 

Countries can and do spend huge piles of money defending themselves from various threats, including maintaining huge militaries to deter invasion.

 

All sorts of technologies come with various safety measures. 

 

>For a more concrete example, one could draw the possibility that video games might cause the players to emulate behaviors, even though you have to be insane to believe the video games are real, to then start advocating for bans of violent video games. However, one could go a step further and say that building games could also make people believe that it’s easy to build things, leading to people building unsafe houses, and what about farming games, or movies, or books?

 

If you are unable to distinguish the arguments for AI risk from this kind of rubbish, that suggests either you are unable to evaluate argument plausibility, or you are reading a bunch of strawman arguments for AI risk.

>The community wants you to believe in a very pessimistic version of the world where all the alignment ideas don’t work, and AI may suddenly be dangerous at any time even when their behaviors look good and they’re constantly rewarded for their good behaviors?

I do not know of any specific existing alignment protocol that I am convinced will work. 

And again: if the reward button is pressed every time the AI does nice things, there is no selection pressure one way or the other between an AI that wants nice things and one that wants the reward button pressed. The way these "rewards" work in ML is similar to selection pressure in evolution. Humans were selected to enjoy sex because it produced more babies, and then invented contraception. The same problem has been observed in toy AI setups too.
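A minimal sketch of why the training signal can't distinguish the two motivations (a hypothetical toy setup of my own, not anyone's actual training code): two reward functions that agree on every training episode produce identical updates, so training cannot select between them.

```python
import random

# Two candidate "motivations", expressed as reward functions over outcomes.
def reward_wants_nice_things(action, button_pressed):
    return 1.0 if action == "nice" else 0.0

def reward_wants_button(action, button_pressed):
    return 1.0 if button_pressed else 0.0

# On the training distribution, the overseer presses the button exactly
# when the action is nice, so the two functions agree on every sample.
random.seed(0)
for _ in range(1000):
    action = random.choice(["nice", "nasty"])
    button_pressed = (action == "nice")
    assert reward_wants_nice_things(action, button_pressed) == \
           reward_wants_button(action, button_pressed)
# Identical reward signal on every episode -> identical updates, so
# training exerts no selection pressure between the two motivations.

# Off-distribution, they diverge: if the AI can seize the button itself,
# "wants button" rates a nasty-action-plus-button outcome as perfect.
print(reward_wants_nice_things("nasty", True))  # 0.0
print(reward_wants_button("nasty", True))       # 1.0
```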

 

This isn't to say that there is no solution, just that we haven't found one yet.

>The AI alignment difficulty lies somewhere on a spectrum, yet they insist to base the policy on the idea that AI alignment lies somewhere in a narrow band of spectrum that somehow the pessimistic ideas are true, yet we can somehow align the AI anyway, instead of just accepting that humanity’s second best alternative to survival is to build something that will survive and thrive, even if we won’t?

We know alignment isn't super easy, because we haven't succeeded yet. We don't really know how hard it is. 

Maybe it's hopelessly hard. But if you're giving up on humanity before you've spent 10% of GDP on the problem, you're doing something very wrong.

Think of a world where aliens invaded, and the government kind of took a few pot shots at them with a machine gun, and then gave up. After all, the aliens will survive and thrive even if we don't. And mass mobilization, shifting to a wartime economy... those are extreme measures.

The Offense-Defense Balance Rarely Changes

Well 

>Invader countries have to defend their conquests and hackers need to have strong information security.

One place where offense went way ahead of defense is with nukes. 

However, nukes are sufficiently hard to make that only a few big powers have them. Hence the MAD balance of power.

If destruction is easy enough, someone will do it.

In the war example, as weapon lethality went up, the fighters moved further apart. So long as both sides have similar weapons and tactics, there exists some range at which you aren't so close as to be instakilled, nor are you so far as to have no hope of attacking. This balance doesn't apply to civilian casualties. 

Report on the Desirability of Science Given Risks from New Biotech

The thing is, we have many options besides just accelerating or decelerating the whole thing. We can pick out gain-of-function research and cutting-edge AI capabilities, and accelerate everything except those.

Science is lots of different pieces; this is differential technological development.

 

 "25% probability that the domain experts are right x 50% chance that it’s not too late for science to affect the onset of the
time of perils x 50% chance that science cannot accelerate us to safety = 6.25%"

This smells of the "multistage fallacy".

You think of something, list a long string of "necessary steps", estimate middling probabilities for each step, and multiply them together to get a small final probability.

The problem is, often some of the steps, or all of them, turn out not to be that necessary. And often, if a step actually happened, it would do so in a way that gives you strong new information about the likelihood of the other steps.

E.g. suppose a new device needs 100 new components to be invented, and you naively assume the probability is 50/50 for each component; the product comes out around 10^-30. But then a massive load of R&D money gets directed towards making the device, and all 100 components are made, because their success probabilities were heavily correlated.
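A toy simulation of that point (all numbers invented, purely for intuition): if a single shared factor like funding drives every component's success, the naive independent product underestimates the joint probability by dozens of orders of magnitude.

```python
import random

N_COMPONENTS = 100
N_TRIALS = 100_000

naive = 0.5 ** N_COMPONENTS  # independence assumption: ~8e-31

# Correlated model: one shared factor (serious R&D money arriving or not)
# drives the success probability of every component at once.
random.seed(0)
successes = 0
for _ in range(N_TRIALS):
    well_funded = random.random() < 0.5
    p_component = 0.999 if well_funded else 0.2
    if all(random.random() < p_component for _ in range(N_COMPONENTS)):
        successes += 1

print(f"naive independent estimate: {naive:.1e}")                 # ~7.9e-31
print(f"correlated-model estimate:  {successes / N_TRIALS:.3f}")  # ~0.45
```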

In this particular case, you are assuming a 25% chance that the domain experts are right about the level of X-risk. In the remaining 75%, apparently X-risk is negligible. There is no possibility for "actually it's way, way worse than the domain experts predicted".

"x 50% chance that it’s not too late for science to affect the onset of the
time of perils x 50% chance that science cannot accelerate us to safety "

If the peril takes the form of a step, say a single moment when the first ASI is turned on, then "accelerating to safety" is meaningless. You can't make the process less risky by rushing through the risky period faster. You can't make Russian roulette safer by playing it really fast, thus only being at risk for a short time.
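A quick numerical contrast (illustrative numbers of my own): acceleration helps when risk accrues per unit of time, and does nothing when risk accrues per discrete event.

```python
# Case 1: risk accrues per unit time, so shortening exposure helps.
hazard_per_year = 0.02
for years in (10, 5):
    print(f"{years} years exposed -> P(survive) = {(1 - hazard_per_year) ** years:.3f}")
# 10 years -> 0.817, 5 years -> 0.904: rushing through genuinely helps.

# Case 2: risk accrues per discrete event (a trigger pull, switching on
# the first ASI). Playing the same six rounds faster changes nothing.
p_survive_pull = 5 / 6
for seconds_between_pulls in (60, 1):
    print(f"{seconds_between_pulls}s between pulls -> P(survive) = {p_survive_pull ** 6:.3f}")
# Both print 0.335: only the number of pulls matters, not their speed.
```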

Neither EA nor e/acc is what we need to build the future

The thing about e/acc is that it's a mix of the reasonable and the insane doom cult.

The reasonable parts talk about AI curing diseases etc., and ask to speed it up.

Given some chance of AI curing diseases, and some chance of AI caused extinction, it's a tradeoff. 

Now, where the optimal point of the tradeoff lands depends on whether we care just about existing humans or about all potential future humans, and also on how big we think the risk of AI-caused extinction is.
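A minimal expected-value sketch of that dependence (every number here is invented, purely to show the structure of the tradeoff):

```python
# All inputs are hypothetical, for illustration only.
p_doom = 0.10          # chance AI causes extinction
p_cure = 0.50          # chance AI delivers cures for disease and aging
cure_value = 5e11      # life-years gained by existing people if it does
existing_life_years = 8e9 * 40   # remaining life-years of current population
future_life_years = 1e15         # potential life-years of future generations

def expected_value(life_years_at_stake):
    return p_cure * cure_value - p_doom * life_years_at_stake

print(f"counting existing humans only: {expected_value(existing_life_years):+.2e}")
print(f"counting future humans too:    {expected_value(existing_life_years + future_life_years):+.2e}")
# Counting only existing humans, the gamble comes out positive here;
# adding potential future humans flips the sign by orders of magnitude.
```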

If we care about all future humans, and think AI is really dangerous, we get a "proceed with extreme caution" position: one that accepts the building of ASI eventually, but is quite keen to delay it 1000 years if that buys any more safety.

On the other end, some people think the risks are small, and mostly care about themselves/current humans. They are more e/acc.

But there are also various "AI will be our worthy successors", "AI will replace humans, and that's great" type e/acc who are ok with the end of humanity. 

Neither EA nor e/acc is what we need to build the future

I don't see any specific criticism of effective altruism other than "I don't like the vibes".

And the criticism based on "acrimonious corporate politics".

"Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much."

"Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present."

Shutting down a company and some acrimonious boardroom discussion is hardly "catastrophic". And it can be the right move, if you think the danger exceeds the value the company is creating.

E.g. if a company makes nuclear power plants that are meltdowns just waiting to happen, or kids' toys full of lead, shutting that company down is a good move.

Why Governments Can't be Trusted to Protect the Long-run Future

It could be an externality, if the land was randomly reassigned a new owner every year or something. But if the land is sold, that is taken into account. It isn't an externality. Capitalism has priced this effect in.
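A minimal sketch of the pricing-in mechanism (toy numbers of my own): a rational buyer pays the discounted value of future rents, so degrading the land shows up immediately as a lower sale price, and the current owner bears the cost.

```python
# Land price as the net present value of future rents (toy numbers).
DISCOUNT = 0.95  # per-year discount factor

def land_price(annual_rents):
    return sum(rent * DISCOUNT ** t for t, rent in enumerate(annual_rents))

maintained = [100] * 50                      # cared-for soil: steady rents
degraded = [100 - 2 * t for t in range(50)]  # depleted soil: declining rents

print(f"sale price if maintained: {land_price(maintained):,.0f}")
print(f"sale price if degraded:   {land_price(degraded):,.0f}")
# The gap is the long-run damage, and it lands on whoever degraded the
# land the moment they try to sell -- internalized, not an externality.
```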

Why Governments Can't be Trusted to Protect the Long-run Future

For something that is long-term but only affects the property of one person, like the field example, the market prices it in and it's not an externality.

No one is significantly incentivized to stop climate change, because no one person bears a significant fraction of the damage caused.
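The arithmetic behind that (rough invented numbers): even if cutting a ton of CO2 does real global good, each individual captures only a tiny slice of the benefit of their own cut.

```python
# Illustrative public-goods arithmetic (all numbers invented).
social_cost_per_ton = 100.0    # global damage per ton of CO2, in dollars
population = 8e9
abatement_cost_per_ton = 50.0  # what it costs one person to cut one ton

private_benefit = social_cost_per_ton / population  # your personal slice
print(f"your benefit from cutting one ton: ${private_benefit:.2e}")  # ~$1e-08
print(f"your cost of cutting one ton:      ${abatement_cost_per_ton:.2f}")
# Globally the cut is worth $100 against $50 of cost, but privately it's
# about a hundred-millionth of a dollar against $50 -- hence no individual
# incentive, and a role for collective action.
```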

Politicians are far from perfect, but at least they have some incentive to tackle these big problems at all.

Radical Energy Abundance

I have yet to see a good case against AI doom. 

The Next Einstein Could Be From Anywhere: Why Developing Country Growth Matters for Progress

I don't think most of these "next Einstein" arguments prove what you think they do.

If you want to increase the chances of string theory breakthroughs, you want to find the sort of people who have a high chance of understanding string theory, and push them even further. If the genetic component is relatively modest, then it becomes mostly a matter of picking someone and throwing lots of resources at educating them. If genetics or randomness control a lot, but are easily observed, then it's a matter of looking out for young maths prodigies and helping them.

Ensuring widely spread education is more about the people making lots of small ideas rather than the lone geniuses. It's getting people from being peasant farmers to codemonkey programmers.

Small signs you live in a complacent society

Have you considered printing off a few sheets of paper, getting some glue, and just adding a few signs yourself? ;-)
