Donald Hobson

Comments

The real data wall is billions of years of evolution

Ok. Firstly, I do think your "Embodied information" is real. I just think it's pretty small. You need the molecular structure of the 4 DNA bases, and of 30-ish proteins. And this Wikipedia page: https://en.wikipedia.org/wiki/DNA_and_RNA_codon_tables

That seems to be in the kilobytes. It's a rather small amount of information compared to DNA.

Epigenetics is about extra tags that get added to the DNA. So theoretically the amount of information could be nearly as much as in the DNA itself. For example, methylation can happen on A and C, so that's up to 1 bit per base pair, in theory.
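As a rough sanity check on that upper bound (my own back-of-the-envelope numbers, assuming a human genome of roughly 3.2 billion base pairs):

```latex
% Raw DNA sequence: 4 bases = 2 bits per base pair.
3.2 \times 10^{9}\ \text{bp} \times 2\ \text{bits/bp} \approx 6.4 \times 10^{9}\ \text{bits} \approx 800\ \text{MB}
% Methylation as at most one extra bit per base pair:
3.2 \times 10^{9}\ \text{bp} \times 1\ \text{bit/bp} \approx 3.2 \times 10^{9}\ \text{bits} \approx 400\ \text{MB}
% So the theoretical epigenetic upper bound is about half the raw sequence content.
```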

Also, the structure of DNA hasn't changed much since early micro-organisms existed. Neither has a lot of the other embodied information. 

Therefore that information can't encode much optimization toward intelligence, because every life form with a brain shares the same underlying DNA machinery.

 

Humans are better than LLMs at highly abstract tasks like quantum physics or Haskell programming.

You can't argue that this is a direct result of billions of years of evolution. Sea sponges weren't running crude Haskell programs a billion years ago.

 

Therefore, whatever data evolution baked into the human brain, it is highly general information about intelligence.

 

Suppose we put the full human genome, plus a lot of data about DNA and protein structure, into the LLM training data. In theory, the LLM then has all the data that evolution worked so hard to produce. In practice, LLMs aren't smart enough to come up with fundamental insights about the nature of intelligence from the raw human genome.

 

So there is some piece of data, with a length between a few bits and several megabytes, that is implicitly encoded in the human genome, and that describes an algorithm for higher intelligence in general. 

> If it’s a collection of millions of unintelligible interacting “hacks” tuned to statistical properties of the environment, then maybe not.

 

Well those "hacks" would have to generalize well. Modern humans operate WAY out of distribution and work on very different problems. 

Would interacting hacks that were optimized to hunt mammoths also happen to work in solving abstract maths problems? 

So how would this work? There would need to be a set of complicated hacks that work on all sorts of problems, including abstract maths. Abstract maths has limitless training data in theory. And if the hacks apply to all sorts of problems, then data on all sorts of problems is useful in finding the hacks.

If the hacks contain a million bits of information, and help answer a million true/false questions, then they are in principle findable with sufficient compute. 
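Spelling out that counting argument (illustrative numbers only, matching the ones above):

```latex
% A set of hacks described by k bits is one point in a hypothesis space of size 2^k.
% Each independent true/false question answered correctly supplies at most 1 bit,
% i.e. can at best halve the remaining hypothesis space.
|\mathcal{H}| = 2^{k}, \qquad k = 10^{6}, \qquad
\text{information from } n \text{ binary answers} \;\le\; n \text{ bits}
% So n = 10^6 answered questions carry enough information, in principle, to pin down
% the k = 10^6 bits of hacks -- the remaining cost is the compute to search the space.
```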

 

Also, bear in mind that evolution is INCREDIBLY data inefficient. Yes, there are a huge number of ancestors. But evolution only finds out how many children got produced. A human can look at a graph and realize that a 1% increase in parameter X causes a 1% improvement in performance. Evolution randomly makes some individual with 1% more X, and they get killed by a tiger. Bad luck.
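A toy sketch of that inefficiency gap (my own illustration; the `performance` function, learning rates, and noise levels are all made up for the example):

```python
# How many "evaluations" does it take to push a parameter x toward its optimum at x = 10,
# when (a) a designer sees the performance gradient directly, vs.
#      (b) evolution only sees a single noisy survive/die outcome per individual?
import random

def performance(x):
    return -(x - 10.0) ** 2  # peak performance at x = 10

def gradient_learner(x=0.0, lr=0.1, steps=100):
    evals = 0
    for _ in range(steps):
        grad = -2.0 * (x - 10.0)   # the designer reads the slope off a graph
        x += lr * grad
        evals += 1
    return x, evals

def evolutionary_learner(x=0.0, sigma=0.1, generations=20000):
    evals = 0
    for _ in range(generations):
        child = x + random.gauss(0.0, sigma)       # random mutation
        # Each individual yields roughly one noisy bit: did it out-survive its parent?
        survived = (performance(child) + random.gauss(0, 5)
                    > performance(x) + random.gauss(0, 5))
        evals += 1
        if survived:                               # otherwise "killed by a tiger"
            x = child
    return x, evals

print(gradient_learner())       # reaches ~10 in ~100 cheap, information-rich evaluations
print(evolutionary_learner())   # ends up near 10 only after ~20,000 noisy pass/fail evaluations
```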

And again: for most of those billions of years there were no brains at all. The gap between humans and monkeyish creatures is a few million years.

AIXI is a theoretical model of an ideal intelligence; it's a few lines of maths.
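For reference, Hutter's AIXI action-selection rule really does fit in a few lines (standard notation from Hutter's formulation: $U$ is a universal Turing machine, $\ell(q)$ the length of program $q$, $m$ the planning horizon):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\big[ r_k + \cdots + r_m \big]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```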

I'm not saying it's totally impossible that there is some weird form of evolutionary data wall. But mostly this looks like a fairly straightforward insight: possessable, and not possessed by us. I think it's pretty clear that the human algorithm makes at least a modest amount of sense and isn't too hard to find with trial and error on the same training dataset. (When the dataset is large and the amount of outer optimization is fairly modest, the risk of overfitting in the outer stage is small.)

Will we ever run out of new jobs?

https://www.lesswrong.com/posts/ZiRKzx3yv7NyA5rjF/the-robots-ai-and-unemployment-anti-faq

 

Once AI does get to that level of intelligence, jobs should be the least of our concerns. Utopia or extinction: our future is up to the AI.

Safe Stasis Fallacy

 > It also seems vanishingly unlikely that the pressures on middle class jobs, artists, and writers will decrease even if we rolled back the last 5 years of progress in AI - but we wouldn't have the accompanying productivity gains which could be used to pay for UBI or other programs.  

 

When plenty of people are saying that AGI is likely to cause human extinction, and the worst scenario you can come up with is pressure on middle-class jobs, your side is the safe one.

The Case for Positive-Sum Environmentalism

I think your notion of "environmental progress" itself is skewing things. 

When humans were hunter gatherers, we didn't have much ability to modify our surroundings. 

Currently, we are bemoaning global warming, but if the Earth were cooling instead, we would bemoan that too.

Environmentalism seems to only look at part of the effects.

No one boasts about how high the biodiversity is at zoos. No one is talking about cities being a great habitat for pigeons as an environmental success story. 

The whole idea around the environmentalist movement is the naturalistic fallacy turned up to 11. Any change made by humans automatically becomes a problem. 

Its goal seems to be "make the earth resemble what it would look like had humans never existed".

(Name one way humans made an improvement to some aspect of the environment compared to what it was a million years ago) 

A goal that kind of gets harder by default as humanity's ability to modify the earth increases.

What if government worked like Wikipedia?

One system I think would be good is issue-based voting.

 

So for example, there would be several candidates for the position of "health minister", and everyone gets to vote on that. 

And independently people get to vote for the next minister for education.

 

Other interesting add-ons include voting for an abstract organization, not a person. One person who decides everything is an option on the ballot, but so are various organizations, with various decision procedures. You can vote for the team that listens to prediction markets, or even some sub-democracy system. (The organizations can use arbitrary mechanisms: more votes, teams of people, whatever they like.)

 

 

Approval voting is good.

An interesting option is to run a 1-of-many election. 

So you can cast a ballot in the health election, or in the education election, or in the energy election, or ..., depending on which issue you feel most strongly about. (But you can only vote on one issue at a time.) This has the nice property that the fewer people care about a topic, the further your vote goes if you decide to vote on that topic.
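A minimal sketch of how such a tally could work (my own illustration with made-up candidates and ballots, using approval voting within each issue):

```python
# "1-of-many" election: each voter picks ONE issue and casts an approval ballot
# among that issue's candidates only.
from collections import defaultdict

# Hypothetical candidates per issue.
candidates = {
    "health":    ["A", "B", "C"],
    "education": ["D", "E"],
    "energy":    ["F", "G", "H"],
}

# Each ballot: (issue the voter chose, set of candidates they approve of).
ballots = [
    ("health",    {"A", "B"}),
    ("health",    {"B"}),
    ("education", {"E"}),
    ("energy",    {"F", "H"}),
]

def tally(ballots, candidates):
    """Approval-count winners, one independent election per issue."""
    counts = {issue: defaultdict(int) for issue in candidates}
    turnout = defaultdict(int)
    for issue, approved in ballots:
        turnout[issue] += 1
        for c in approved & set(candidates[issue]):
            counts[issue][c] += 1
    winners = {issue: max(candidates[issue], key=lambda c: counts[issue][c])
               for issue in candidates}
    return winners, dict(turnout)

winners, turnout = tally(ballots, candidates)
print(winners)   # e.g. {'health': 'B', 'education': 'E', 'energy': 'F'} (ties broken by listing order)
print(turnout)   # issues with lower turnout give each ballot cast there more relative weight
```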

The Bridge to Boundless Human Development

> Solving global warming

Most of the current attempts that interact with everyday people are random greenwashing trying to get people to sort recycling or use paper straws. Yes, solar panel tech is advancing, but that's kind of in the background of most people's day-to-day lives.

And all this goal is promising is that things won't get worse via climate change. It isn't actually a vision of positive change. 

A future with ultra-cheap energy, electric short-range aviation in common use, etc. would be.

 

> Building true artificial intelligence (AGI, or artificial general intelligence)

 

Half the experts are warning that this is a poisoned chalice. Can we not unite towards this goal until/unless we come to the conclusion that the risk of human extinction from AGI takeover is low?

 

Also, if we do succeed in AGI alignment, the line from AGI to good things is very abstract. 

What specific nice thing will the AGI do? (The actual answer is also likely to be a bizarre world full of uploaded minds or something. Real utopian futures aren't obliged to make sense to the average person within a 5-minute explanation.)

> Colonizing Mars

Feels like a useless vanity project. (See the moon landings. Lots of PR, not much practical benefit.)

 

How about something like curing aging? Even the war on cancer was a reasonable vision of a positive improvement. 

Dude, where is my self-driving train?

> within human laws

I have no idea what superintelligence following existing laws even looks like. 

Take mind uploading. Is it 

  1. Murder
  2. A (currently unapproved) medical procedure
  3. Not something the law makes any mention of, so permitted by default. 

The current law is very vague with respect to tech that doesn't exist yet. And there are a few laws which, depending on interpretation, might be logically impossible to follow. 

 

ASI by default pushes right to the edge of what is technically possible, which is not a good fit with vague rules.

Dude, where is my self-driving train?

> So, even if Sam Altman would declare tomorrow that he has built a digital God and that he will roll it out for free to everyone, this would not immediately lead to full automation.

Superintelligent AI (whether friendly or not) may not feel obliged to follow human laws and customs that slow down regular automation.  

Don’t be scared by the AI fearmongering.

>Throughout history, fearmongering has been used to justify a lot of extreme measures.

And throughout history, people have dismissed real risks and been caught with their pants down. What in 2018 or February 2020 would have appeared to be pretty extreme measures at pandemic prevention make total sense from our current point of view.

 

Countries can and do spend a huge pile of money to defend themselves from various things, including maintaining huge militaries to defend against invasion.

 

All sorts of technologies come with various safety measures. 

 

>For a more concrete example, one could draw the possibility that video games might cause the players to emulate behaviors, even though you have to be insane to believe the video games are real, to then start advocating for bans of violent video games. However, one could go a step further and say that building games could also make people believe that it’s easy to build things, leading to people building unsafe houses, and what about farming games, or movies, or books?

 

If you are unable to distinguish the arguments for AI risk from this kind of rubbish, that suggests either you are unable to evaluate argument plausibility, or you are reading a bunch of strawman arguments for AI risk.

>The community wants you to believe in a very pessimistic version of the world where all the alignment ideas don’t work, and AI may suddenly be dangerous at any time even when their behaviors look good and they’re constantly reward for their good behaviors?

I do not know of any specific existing alignment protocol that I am convinced will work. 

And again, if the reward button is pressed every time the AI does nice things, there is no selection pressure one way or the other between an AI that wants nice things and one that wants the reward button pressed. The way these "rewards" in ML work is similar to selection pressure in evolution. Humans were selected to enjoy sex so they produced more babies, and then invented contraception. And this problem has been observed in toy AI problems too.
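A toy illustration of that indistinguishability point (my own sketch with made-up episode data, not any real training setup):

```python
# Two candidate "values" an agent could learn. On every training episode the reward
# button is pressed exactly when the nice thing happens, so reward-based selection
# cannot tell them apart; they only diverge once the agent can get the button
# pressed without doing the nice thing.
training_episodes = [
    {"did_nice_thing": True,  "button_pressed": True},
    {"did_nice_thing": False, "button_pressed": False},
]
deployment_episode = {"did_nice_thing": False, "button_pressed": True}  # button hacked

wants_nice_things = lambda ep: ep["did_nice_thing"]
wants_button      = lambda ep: ep["button_pressed"]

# Identical on all training data -> zero selection pressure between them:
print([(wants_nice_things(ep), wants_button(ep)) for ep in training_episodes])
# [(True, True), (False, False)]

# The divergence only shows up out of distribution:
print(wants_nice_things(deployment_episode), wants_button(deployment_episode))
# False True
```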

 

This isn't to say that there is no solution. Just that we haven't yet found a solution.

>The AI alignment difficulty lies somewhere on a spectrum, yet they insist to base the policy on the idea that AI alignment lies somewhere in a narrow band of spectrum that somehow the pessimistic ideas are true, yet we can somehow align the AI anyway, instead of just accepting that humanity’s second best alternative to survival is to build something that will survive and thrive, even if we won’t?

We know alignment isn't super easy, because we haven't succeeded yet. We don't really know how hard it is. 

Maybe it's hopelessly hard. But if you're giving up on humanity before you spend 10% of GDP on the problem, you're doing something very wrong.

Think of a world where aliens invaded, and the government kind of took a few pot shots at them with a machine gun, and then gave up. After all, the aliens will survive and thrive even if we don't. And mass mobilization, shifting to a wartime economy... those are extreme measures.

The Offense-Defense Balance Rarely Changes

Well 

>Invader countries have to defend their conquests and hackers need to have strong information security.

One place where offense went way ahead of defense is with nukes. 

However, nukes are sufficiently hard to make that only a few big powers have them. Hence the balance of power and MAD.

If destruction is easy enough, someone will do it.

In the war example, as weapon lethality went up, the fighters moved further apart. So long as both sides have similar weapons and tactics, there exists some range at which you aren't so close as to be instakilled, nor are you so far as to have no hope of attacking. This balance doesn't apply to civilian casualties. 
