jasoncrawford

Founder, The Roots of Progress (rootsofprogress.org)

Comments

Does the Progress Movement agree with Singer's Drowning Child Argument?

I don't think there would be broad agreement within the progress community about the Singer argument, or more generally about utilitarianism.

Personally, I am neither a utilitarian nor an altruist, and I don't agree with the drowning child argument as I understand it.

I think how much to spend on yourself vs. charity or other causes that you believe in is a personal decision, based on what is meaningful and important to you.

How to help if you work in average company?

Hmm, I honestly don't know whether progress studies can be applied to any random job or company. I think of it more as something that applies at a society-wide level. Of course, it might inspire some people to take jobs at more ambitious / cutting-edge companies (or start such companies!). But that also doesn't mean there's anything wrong with companies that aren't cutting-edge—it takes all kinds of companies to make a functioning economy.

If anything, maybe progress studies can help remind you of the moral value of economic growth. To the extent that you do your job well, create economic value, and produce an honest profit—you are contributing to the well-being of the world. That makes it worth taking pride in a job well done. Trite but true.

Jason Crawford in Bangalore, August 21 to September 8

Update: I’m already planning to give brief remarks at a few events coming up very soon:

If you’re in/near Bangalore, hope to see you there!

Matt Ritter's Shortform

This book is “for babies” but it's probably just about right for a 3yo. It is the best “STEM for babies” book I have ever seen, maybe the only one I really like: https://computerengineeringforbabies.com/ 

6 Minute Capitalist Meditation

I don't know exactly how seriously to take it—but I know Michael Dearing, whose site that is, and he is 100% in favor of capitalism, so… at least partially serious?

Wanted: Technical animator and/or front-end developer for interactive diagrams of invention

Thanks Gergő. We're doing this as a “work made for hire,” meaning that the rights belong to us and we can then license it however we want.

If you wish to make an apple pie, you must first become dictator of the universe [draft for comment]

Thanks a lot, Zvi.

Meta-level: I think to have a coherent discussion, it is important to be clear about which levels of safety we are talking about.

  • Right now I am mostly focused on the question of: is it even possible for a trained professional to use AI safely, if they are prudent and reasonably careful and follow best practices?
  • I am less focused, for now, on questions like: How dangerous would it be if we open-sourced all models and weights and just let anyone in the world do anything they wanted with the raw engine? Or: what could a terrorist group do with access to this? And I am not right now taking a strong stance on these questions.

And the reason for this focus is:

  • The most profound arguments for doom claim that literally no one on Earth can use AI safely, with our current understanding of it.
  • Right now there is a vocal “decelerationist” group saying that we should slow, pause, or halt AI development. I think this case mostly rests on the most extreme and, IMO, least tenable versions of the doom argument.

With that context:

We might agree, at the extreme ends of the spectrum, that:

  • If a trained professional is very cautious and sets up all of the right goals, incentives, and counter-incentives in a carefully balanced way, the AI probably won't take over the world.
  • If a reckless fool puts extreme optimization pressure on a superintelligent, situationally-aware agent with no moral or practical constraints, then very bad things might happen.

I feel like we are still at different points in the middle of that spectrum, though. You seem to think that the balancing of incentives has to be quite careful, because some fairly serious power-seeking is the default outcome. My intuition is something like: problematic power-seeking is possible but not expected under most normal/reasonable scenarios.

I have a hunch that the crux has something to do with our view of the fundamental nature of these agents.

… I accidentally posted this without finishing it, but honestly I need to do more thinking to be able to articulate this crux.

A plea for solutionism on AI safety

Certainly. You need to look at both benefits and costs if you are talking about, for instance, what to do about a technology—whether to ban it, or limit it, or heavily regulate it, or fund it / accelerate it, etc.

But that was not the context of this piece. This piece had only one point: that proponents of AI (of which I am one!) should not dismiss or ignore potential risks. That was all.
