Counterargument #1: it might be better to incentivize people to run actual companies that deliver business value.
Response: there is a great deal of value to be created that is hard to capture via current market structures, and many people are passionate about things that fall into this category.
There should be some kind of official recognition + prize for people providing public goods on the Internet. Prizes for free software and open-source projects exist, but they do not come close to covering the amount of intangible value people can deliver on the Internet.
Examples include https://avherald.com, but also many open-source projects, and perhaps people like patio11, gwern, Lilian Weng, or Bartosz Ciechanowski. Some YouTubers would likely qualify as well, but I'm not very familiar with the medium.
Theory of change: simply remind people more often that if they are highly competent and passionate about something not directly marketable, the Internet has made it easy for them to increase the amount of beauty in the world by turning their passion into a public project.
It's easy to disprove an equal distribution of top talent across countries; however, it's also very easy to disprove a distribution that closely tracks opportunity (say, as measured by economic development).
I'd also like to note that IMO performance is a strong but quite noisy signal of top talent distribution: some countries' educational and career systems do not particularly care about it (France comes to mind), some countries kneecap their performance on purpose (China doesn't let anyone participate twice), and the cultural importance of high-school competitions varies between countries.
The entire argument rests on current nuclear arsenals being powerful enough to kill everyone right now, yet the author does not cite a single link supporting that key assertion. And for good reason: even pessimistic scientific estimates say around 40% of people would survive.
This forum and the movement in general are not really popular outside of some very small elite circles. I hope that changes and you manage to propagate the memes more widely. The normalization of declinism and romanticism^1 in most mainstream communities creates communication barriers and reduces society's ability to take positive action.
However, I am against progress maximalism as expressed on this forum. Let me elaborate on my view, parts of which have definitely been expressed before:
The ideas of The Roots of Progress would be obviously correct in a world slightly different from ours.
I see "progress" as mining knowledge, enabling technology. There is no creation; every idea is waiting to get discovered, validated and applied. If something would improve our lives, it is indeed a moral imperative to “get it out of the ground”.
Almost all positive things that have happened to humanity so far stem from mining technological progress (fire, agriculture, domestication of horses, fossil fuels, electricity, extermination of pathogens, contraception, computers, the Internet), with the rest coming from progress in social and political technology (religion, morals, basic freedoms, democracy, rule of law), all of it enabled by communication technology (language, writing, books, Internet platforms). We will make progress on all of the above in the future.
Of course, there is often a negative "environmental impact" coming from disruption or negative externalities. But in the end we always solve this by creating better social or scientific technologies, or the problems simply disappear as people adapt.
Progress happens by mining for knowledge and applying it in the world. Some great people open new mines; others improve the processes of existing ones. We need multiple mines in operation, each yielding different ways of doing things. Understanding the mechanisms behind mine creation and preservation will obviously give returns far exceeding the invested effort, if we leverage this knowledge to improve mining.
Unfortunately, we find ourselves in a different world. A few years back we discovered mithril under a single mountain, and it looks like it will make all but a tiny number of other ores obsolete in less than half a human lifetime. Moreover, it is inherently easier to extract than what we are used to in other mines.
The obvious goal missing from the progress framework is not to improve processes in all mines everywhere; it's not even to optimize the mithril mining process. It is to make sure there are no demons of the ancient world^2 waiting inside.
The progress movement is nevertheless a positive thing, for various reasons:
But we must not forget that the following two bitter statements look more and more true each day:
^1: Romanticism of the past, of nature, of the status quo, or of anything that only looks nice but falls apart when faced with the test of "does moving in that direction really make our lives better?"
^2: It also makes sense to fight smaller negative externalities, because when the technology is so powerful, the sheer speed of deployment might overwhelm defensive mechanisms against irresponsible and bad-faith use, and be too fast for people to adapt while retaining sanity.
[Epistemic status: medium, really not an expert]
I see that criticism, and many others, as more of an indictment of the lack of progress in spirituality, which is supposed to give people purpose and comfort. The old institutions are slow to adapt to a changing world, and some of the new institutions are simply not good at comforting.
I'm claiming that science is getting harder, in the sense that it is increasingly challenging to make discoveries with impact comparable to those of the past.
How does this square with the 2012-2022 machine learning push? The groundbreaking papers are not particularly impressive from a technical standpoint; in fact it's a well-known meme that machine learning research is quite simple compared to other mathy academic areas. And the impact potential is far beyond any plausible predictions from 10 years ago.
Maybe this is true for most non-ML sciences? But advances in machine learning are already obsoleting decades of work in some other fields of science, and there are reasons to believe this trend will continue.
In just the last year, science has done several things previously considered impossible. I agree with the analyses of academic metrics, but claims that science is slowing down should come with somewhat stronger supporting evidence.
EDIT: Oops, I didn't see that the original post is from June 2022, when some of the supporting arguments for my case were not yet available. But the comments still stand; I do not think the assertion in the title is true in its most straightforward interpretation.
Could you give a prediction of the form "in 2040, there will exist people who are more efficient at skill X than the best AI models" in which you are more confident than not? What about 2030 or 2050?
(Don't take this in bad faith; I have no intention of going back and mocking anyone's predictions. But there is very useful signal in correct answers, and I'm curious why more people don't offer takes on this.)
Counterargument #2: AI makes creating cool stuff on the Internet obsolete.
Response: on the contrary, in many possible futures (especially ones with constraints on agency), AI empowers people to deliver beauty to others by automating everything except the things they are passionate about. Motivation becomes more of a bottleneck.
Also, these types of public goods are some of the things that make me most proud of current human civilization. I'm sure many here will agree. Even if we lose them in the future, I think they still matter, if only as a nod to the things we used to value in the past.