Ok. Firstly, I do think your "embodied information" is real. I just think it's pretty small. You need the molecular structure of the 4 DNA bases, and of ~30 proteins. And this Wikipedia page: https://en.wikipedia.org/wiki/DNA_and_RNA_codon_tables
That seems to be in the kilobytes. It's a rather small amount of information compared to DNA.
Epigenetics is about extra tags that get added. So theoretically the amount of information could be nearly as much as in the DNA. For example, methylation can happen on A and C, so that's 1 bit per base pair, in th...
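A quick back-of-the-envelope check of that "nearly as much as the DNA" claim. The genome size here is my own rough round number (~3.2 billion base pairs for a human-sized genome), not something from the thread:

```python
# Rough upper bound on epigenetic information vs. raw sequence information.
# Assumes a human-sized genome of ~3.2e9 base pairs (hypothetical round number).
GENOME_BP = 3.2e9

# 4 possible bases = 2 bits of sequence information per base pair.
sequence_bits = 2 * GENOME_BP

# Every A-T pair contains one A and every C-G pair one C, so if methylation
# can happen on A and C, that's at most 1 methylated-or-not bit per base pair.
methylation_bits = 1 * GENOME_BP

print(sequence_bits / 8 / 1e6)     # ~800 MB of raw sequence
print(methylation_bits / 8 / 1e6)  # ~400 MB upper bound for methylation tags
```

So the methylation upper bound really is on the same order of magnitude as the sequence itself, which is the point being made.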
https://www.lesswrong.com/posts/ZiRKzx3yv7NyA5rjF/the-robots-ai-and-unemployment-anti-faq
Once AI does reach that level of intelligence, jobs should be the least of our concerns. Utopia or extinction, our future is up to the AI.
> It also seems vanishingly unlikely that the pressures on middle class jobs, artists, and writers will decrease even if we rolled back the last 5 years of progress in AI - but we wouldn't have the accompanying productivity gains which could be used to pay for UBI or other programs.
When plenty of people are saying that AGI is likely to cause human extinction, and the worst scenario you can come up with is lost middle-class jobs, your side is the safe one.
I think your notion of "environmental progress" itself is skewing things.
When humans were hunter gatherers, we didn't have much ability to modify our surroundings.
Currently we are bemoaning global warming, but if the Earth were cooling instead, we would bemoan that too.
Environmentalism seems to only look at part of the effects.
No one boasts about how high the biodiversity is at zoos. No one is talking about cities being a great habitat for pigeons as an environmental success story.
The whole idea around the environmentalist movement ...
One system I think would be good is issue based voting.
So for example, there would be several candidates for the position of "health minister", and everyone gets to vote on that.
And independently people get to vote for the next minister for education.
Other interesting add-ons include voting for an abstract organization, not a person. One person who decides everything is an option on the ballot, but so are various organizations with various decision procedures. You can vote for the team that listens to prediction markets, or even some sub...
Solving global warming
Most of the current attempts that interact with everyday people are random greenwashing, trying to get people to sort recycling or use paper straws. Yes, solar panel tech is advancing, but that's kind of in the background to most people's day-to-day lives.
And all this goal promises is that things won't get worse via climate change. It isn't actually a vision of positive change.
A future with ultra-cheap energy, electric short-range aviation in common use, etc. ...

> Building true artificial intelligence (AGI, or artifi... within human laws
I have no idea what superintelligence following existing laws even looks like.
Take mind uploading. Is it...
The current law is very vague with respect to tech that doesn't exist yet. And there are a few laws which, depending on interpretation, might be logically impossible to follow.
ASI by default pushes right to the edge of what is technically possible, which is not a good fit with vague rules.
> So, even if Sam Altman would declare tomorrow that he has built a digital God and that he will roll it out for free to everyone, this would not immediately lead to full automation.
Superintelligent AI (whether friendly or not) may not feel obliged to follow human laws and customs that slow down regular automation.
> Throughout history, fearmongering has been used to justify a lot of extreme measures.
And throughout history, people have dismissed real risks and been caught with their pants down. Pandemic-prevention measures that would have looked pretty extreme in 2018 or February 2020 make total sense from our current point of view.
Countries can and do spend huge piles of money to defend themselves from various things, including huge militaries to defend against invasion etc.
All sorts of technologies come with various safety measures. ...
Well
> Invader countries have to defend their conquests and hackers need to have strong information security.
One place where offense went way ahead of defense is with nukes.
However, nukes are sufficiently hard to make that only a few big powers have them. Hence a balance of power: MAD.
If destruction is easy enough, someone will do it.
In the war example, as weapon lethality went up, the fighters moved further apart. So long as both sides have similar weapons and tactics, there exists some range at which you aren't so close as to be instakilled, nor are you so far as to have no hope of attacking. This balance doesn't apply to civilian casualties.
The thing is, we have many options besides accelerating or decelerating the whole thing. We can single out gain-of-function research and cutting-edge AI capabilities, and accelerate everything except those.
Science is lots of different pieces; hence, differential technological development.
"25% probability that the domain experts are right x 50% chance that it’s not too late for science to affect the onset of the time of perils x 50% chance that science cannot accelerate us to safety = 6.25%"
This smells of the "multistage fallacy".
You think...
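A minimal sketch of why multiplying stage-by-stage point estimates can mislead: the multiplication smuggles in an independence assumption. The shared-factor model and all the numbers below are my own toy construction, not from the quoted post:

```python
import random

random.seed(0)

# Point estimates for three stages, as in the quoted calculation.
stage_probs = [0.25, 0.5, 0.5]

# Naive multistage estimate: treat the stages as independent.
naive = 1.0
for p in stage_probs:
    naive *= p  # 0.25 * 0.5 * 0.5 = 0.0625

# Toy correlated model: one shared factor drives all three stages.
# Stage i succeeds iff the shared factor clears that stage's bar, which
# keeps each stage's *marginal* probability equal to its point estimate.
n = 200_000
hits = 0
for _ in range(n):
    shared = random.random()
    if all(shared < p for p in stage_probs):
        hits += 1
joint = hits / n  # ~0.25: the hardest stage dominates

print(naive, joint)
```

Under this perfect correlation the joint probability is min(stage_probs) = 25%, four times the naive 6.25%, even though every per-stage estimate is unchanged.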
The thing about e/acc is it's a mix of the reasonable and the insane doom cult.
The reasonable parts talk about AI curing diseases etc., and ask to speed it up.
Given some chance of AI curing diseases, and some chance of AI caused extinction, it's a tradeoff.
Where the optimal point of the tradeoff lands depends on whether we care only about existing humans, or about all potential future humans. And also on how big we think the risk of AI extinction is.
If we care about all future humans, and think AI is really dangerous, we get a "proceed wit...
I don't see any specific criticism of effective altruism other than "I don't like the vibes".
And the criticism from "acrimonious corporate politics".
"Helen Toner was apparently willing to let OpenAI be destroyed because of a general feeling that the organization was moving too fast or commercializing too much."
Between the two of them, a philosophy that aims to prevent catastrophic risk in the future seems to be creating its own catastrophes in the present.
Shutting down a company and some acrimonious board room discussion is hardly "catastrophic"...
It could be an externality if the land were randomly reassigned a new owner every year or something. But if the land is sold, that is taken into account; it isn't an externality. Capitalism has priced this effect in.
For something long-term that only affects the property of one person, like the field example, the market prices it in and it's not an externality.
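A sketch of that pricing-in argument with made-up numbers: the sale price of the field is roughly the discounted value of its future yields, so an owner who degrades the soil eats the long-term loss at sale time rather than passing it on as an externality.

```python
# Present value of a stream of annual yields (all numbers hypothetical).
def land_price(annual_yield, discount_rate=0.05, years=100):
    return sum(annual_yield / (1 + discount_rate) ** t
               for t in range(1, years + 1))

healthy = land_price(100.0)   # well-maintained field
depleted = land_price(60.0)   # owner exhausted the soil for short-term gain

# The difference shows up in the sale price, so the current owner bears it:
print(round(healthy - depleted, 2))
```

The exact discount rate and horizon don't matter for the point; any sane pricing of future yields makes the current owner internalize the damage.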
No one is significantly incentivized to stop climate change, because no one person bears a significant fraction of the damage caused.
Politicians are far from perfect, but at least they have some incentive to tackle these big problems at all.
I have yet to see a good case against AI doom.
I don't think most of these "next Einstein" arguments prove what you think they do.
If you want to increase the chances of string theory breakthroughs, you want to find the sort of people who have a high chance of understanding string theory, and push them even further. If any genetic component is relatively modest, then it becomes mostly a matter of picking someone and throwing lots of resources at educating them. If genetics or randomness control a lot, but are easily observed, then it's a matter of looking out for young maths prodigies and helping them.
Ensuring widely s...
Have you considered printing off a few sheets of paper, getting some glue, and just adding a few signs yourself? ;-)
Some tech, like seatbelts, is almost purely good. Some, like nukes, is almost purely bad. Some, like cars, we might want to hold off on until we develop seatbelts and traffic lights before using it widely. It depends on the technology.
Elon Musk is very good at making himself the center of as many conversations about technology as possible.
He should not be taken as a reliable source of information.
Living on Mars with tech not too far beyond current tech is like living in Antarctica today: it's possible, but it isn't clear why you would want to. A few researchers on a base, not much else.
Think ISS but with red dust out the windows.
At some point, which might be soon or not so soon, tech will be advanced enough that it becomes easy to get to Mars. But at t...
I don't buy the "aside from the internet, nothing much" framing. Firstly, computer and internet tech have been fairly revolutionary across substantial chunks of industry and of our daily lives. This is the "a smartphone is only 1 device, so it doesn't count as much progress" thinking, which ignores the great pile of abacuses and slide rules and globes and calculators and alarm clocks and puzzle toys and landline phones and cameras and cassette tapes and ... that it replaced and improved on.
Secondly, there are loads of random techs that were invented recently, ...
> If longtermists existed back when blacks were widely regarded as morally inferior to whites, would the moral calculus of the longtermists have included the prosperity of future blacks or not? It seems like it couldn't possibly have included that. More generally, longtermism can't take into account progress in moral knowledge, nor what future generations will choose to value. Longtermists impose their values onto future generations.
It is true that we can't predict future moral knowledge. However:
I think this is begging the question, isn't it?
If a risk is indeed very unlikely, then we will tend to overestimate it. (If the probability is 0, it's impossible to underestimate it.)
But for risks that are actually quite likely, we are more likely to underestimate them.
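A toy model of that asymmetry (my own construction; the noise level is arbitrary): if our estimate is the true probability plus symmetric noise, clipped to the valid range [0, 1], then a risk near 0 can only be overestimated on average, while a risk near 1 gets underestimated.

```python
import random

random.seed(1)

def mean_estimate(true_p, noise=0.2, n=100_000):
    """Average of noisy probability estimates, clipped to [0, 1]."""
    total = 0.0
    for _ in range(n):
        est = true_p + random.uniform(-noise, noise)
        total += min(1.0, max(0.0, est))  # clip to a valid probability
    return total / n

print(mean_estimate(0.0))  # > 0: an impossible risk can only be overestimated
print(mean_estimate(0.9))  # < 0.9: clipping at 1 now biases us downward
```

The bias flips sign purely because of where the true probability sits relative to the boundaries, with no monkey-brain psychology needed.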
And of course, biased estimates cut both ways: "Our primitive monkey brains are good at ignoring and underestimating abstract and hard-to-understand risks."