I have yet to see a good case against AI doom.
I don't think most of these "next Einstein" arguments prove what you think they do.
If you want to increase the chances of string theory breakthroughs, you want to find the sort of people who have a high chance of understanding string theory, and push them even further. If any genetic component is relatively modest, then the strategy is mostly to pick someone and throw lots of resources at educating them. If genetics or randomness control a lot, but are easily observed, then it's a matter of looking out for young maths prodigies and helping them.
Ensuring widely spread education is more about getting lots of people producing lots of small ideas than about lone geniuses. It's getting people from being peasant farmers to code-monkey programmers.
Have you considered printing off a few sheets of paper, getting some glue, and just adding a few signs yourself? ;-)
Some tech, like seatbelts, is almost pure good. Some, like nukes, is almost pure bad. Some, like cars, we might want to hold off on until we've developed seatbelts and traffic lights for them. It depends on the technology.
Elon Musk is very good at making himself the center of as many conversations about technology as possible.
He should not be taken as a reliable source of information.
Living on Mars with tech not too far beyond current tech is like living in Antarctica today. It's possible, but it isn't clear why you would want to. A few researchers on a base, not much else.
Think ISS but with red dust out the windows.
At some point, which might be soon or not so soon, tech will be advanced enough that getting to Mars becomes easy. But at that point, putting traditional biological humans on Mars might be stupid compared to, say, self-replicating robots containing computers running uploaded human minds in the asteroid belt.
A Mars base is cool scifi. But it might turn out to be the largest white elephant in history. It doesn't serve any obvious practical purpose in increasing human wellbeing or industrial capability.
Sure, at some point you are disassembling all the planets to build a Dyson sphere. But before that, a Mars landing doesn't necessarily represent any real progress.
I don't buy the "aside from the internet, nothing much" framing. Firstly, computer and internet tech have been fairly revolutionary across substantial chunks of industry and our daily lives. This is the "a smartphone is only 1 device so doesn't count as much progress" thinking, which ignores the great pile of abacuses and slide rules and globes and calculators and alarm clocks and puzzle toys and landline phones and cameras and cassette tapes and ... that it replaced and improved on.
Secondly, there are loads of random techs that were invented recently: solar PV, LEDs, mRNA vaccines, electric (self-driving?) cars.
And finally, a substantial part of progress is the loads of tiny changes that make things cheaper and better. If you don't count things like 3D printers and drones that haven't really gotten good yet, then of course you will see fewer recent inventions. The first fridges were expensive and not that good either.
If longtermists existed back when blacks were widely regarded as morally inferior to whites, would the moral calculus of the longtermists have included the prosperity of future blacks or not? It seems like it couldn't possibly have included that. More generally, longtermism can't take into account progress in moral knowledge, nor what future generations will choose to value. Longtermists impose their values onto future generations.
It is true that we can't predict future moral knowledge. However, this is an interesting new combination of standard mistakes.
Another issue is that if altruistic morality is taken to its logical conclusion, then everyone would be trying to solve everyone else's problems. How could that possibly be more effective than everyone trying to solve their own problems?
Altruistic morality in the total utilitarian sense would recognize that solving everyone's problems is equally valuable, including our own. In the current world, practically no humans are going to put themselves lower than everyone else, and most of the best opportunities for altruism are helping others. But in the hypothetical utopia land, people would solve their own problems, there being no more pressing problems to solve.
If we are here to help others, what on Earth are the others here for?
Well, imagine the ideal end goal, if we develop some magic tech: everyone living in some sort of utopia. At that point, most of the altruists can say that there is no one in the world who really needs helping, and just enjoy the utopia. But until then, they help.
What we actually need to be is selfish, not altruistic. We need to make as rapid progress as possible so that the people of the future themselves will be at a starting point where they can make even more rapid progress.
An altruist argument for selfishness. You are arguing that selfishness is good because it benefits future people.
If you were actually selfish, you would be arguing that selfishness is good because it makes you happy, and screw those future people, who cares about them.
I also don't know where you got the idea that selfishness = maximum progress.
Suppose I am a genius fusion researcher. (I'm not.) I build fusion reactors so future people will have abundant clean energy. If I were selfish, I would play video games all day.
Altruism is subordinating one's own preferences to those of others. It's a zero-sum game. It's not win-win.
In the ideal utilitarian hypothetical utopia, who exactly is losing? If hypothetically everyone had the exact same goal, the wellbeing of humanity as a whole, valuing their own wellbeing at exactly the same level as everyone else's, that would be a zero-difference game, the exact opposite of a zero-sum game.
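A toy numerical contrast might make this sharper (the payoffs are made up, purely for illustration): in a zero-sum game the players' payoffs cancel in every outcome, whereas if everyone shares one utility function, their payoffs are identical in every outcome.

```python
# Toy sketch (hypothetical payoffs): zero-sum vs shared-utility games.
# In a zero-sum game, the players' payoffs in every outcome sum to zero.
# If everyone values humanity's wellbeing equally, every player gets the
# SAME payoff in every outcome -- a "zero-difference" game, as above.

zero_sum = {          # outcome -> (payoff to A, payoff to B)
    "A wins": (+1, -1),
    "B wins": (-1, +1),
}
shared_utility = {    # both players just value total wellbeing
    "utopia": (+10, +10),
    "squalor": (-10, -10),
}

assert all(a + b == 0 for a, b in zero_sum.values())        # payoffs cancel
assert all(a - b == 0 for a, b in shared_utility.values())  # payoffs match
```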
For something that is long term, but only affects the property of one person, like the field example, the market prices it in and it's not an externality.
No one is significantly incentivized to stop climate change, because no one person bears a significant fraction of the damage caused.
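A back-of-envelope sketch of why the incentive vanishes (all numbers hypothetical): if abating costs you more than your 1/N share of the averted damage, it is never individually rational, even when it is collectively a clear win.

```python
# Back-of-envelope externality arithmetic (all numbers hypothetical).
total_damage = 1e12    # damage averted if one actor abates, spread over everyone
population = 8e9       # people sharing that benefit
abatement_cost = 1e3   # cost the abating individual pays alone

individual_benefit = total_damage / population  # your 1/N share
print(individual_benefit)                       # 125.0

assert total_damage > abatement_cost      # collectively a huge win...
assert individual_benefit < abatement_cost  # ...individually a loss, so no one acts
```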
Politicians are far from perfect, but at least they have some incentive to tackle these big problems at all.