Founder, The Roots of Progress (rootsofprogress.org)
Hmm, I honestly don't know whether progress studies can be applied to any random job or company. I think of it more as applying at a society-wide level. Of course, it might inspire some people to take jobs at more ambitious / cutting-edge companies (or start such companies!). But that also doesn't mean there's anything wrong with companies that aren't cutting-edge—it takes all kinds of companies to make a functioning economy.
If anything, maybe progress studies can help remind you all of the moral value of economic growth. To the extent you all do your job well, and create economic value, and produce an honest profit—you are contributing to the well-being of the world. That makes it worth taking pride in a job well done. Trite but true.
Update: I’m already planning to give brief remarks at a few events coming up very soon:
If you’re in/near Bangalore, hope to see you there!
Fascinating, thanks for the pointer!
This book is “for babies” but it's probably just about right for a 3yo. It is the best “STEM for babies” book I have ever seen, maybe the only one I really like: https://computerengineeringforbabies.com/
I don't know exactly how seriously to take it—but I know Michael Dearing, whose site that is, and he is 100% in favor of capitalism, so… at least partially serious?
Thanks, Gergő. We're doing this as a “work made for hire,” meaning that the rights belong to us and we can then license it however we want.
Thanks a lot, Zvi.
Meta-level: I think to have a coherent discussion, it is important to be clear about which levels of safety we are talking about.
And the reason for this focus is:
With that context:
We might agree, at the extreme ends of the spectrum, that:
I feel like we are still at different points in the middle of that spectrum, though. You seem to think that the balancing of incentives has to be pretty careful, because some pretty serious power-seeking is the default outcome. My intuition is something like: problematic power-seeking is possible but not expected under most normal/reasonable scenarios.
I have a hunch that the crux has something to do with our view of the fundamental nature of these agents.
… I accidentally posted this without finishing it, but honestly I need to do more thinking to be able to articulate this crux.
Certainly. You need to look at both benefits and costs if you are talking about, for instance, what to do about a technology—whether to ban it, or limit it, or heavily regulate it, or fund it / accelerate it, etc.
But that was not the context of this piece. There was only one topic for this piece, which was that the proponents of AI (of which I am one!) should not dismiss or ignore potential risks. That was all.
I would call it metascience, and I would include Convergent Research and Speculative Technologies. See also this Twitter thread.
There is no history that I know of, it's almost too new for that. But here's an article: “Inside the multibillion-dollar, Silicon Valley-backed effort to reimagine how the world funds (and conducts) science”
I don't think there would be broad agreement within the progress community about the Singer argument, or more generally about utilitarianism.
Personally, I am neither a utilitarian nor an altruist, and I don't agree with the drowning child argument as I understand it.
I think how much to spend on yourself vs. charity or other causes that you believe in is a personal decision, based on what is meaningful and important to you.