Hi Eli, I read your piece on the regulatory barriers to AI progress having material impacts on society. For me this pushes things in the direction of "we'll have more AI automation of AI R&D before big societal trends in job automation", which could imply faster AI progress generally if labs focus more on their own AI -> research automation -> better AI feedback loop. I do think that an AI that could perform basically any job (not requiring hands) as well as a human for pennies on the dollar would radically transform society, but maybe we don't see as much change in AI systems until then. This Metaculus question (https://www.metaculus.com/questions/3698/when-will-an-ai-achieve-a-98th-percentile-score-or-higher-in-a-mensa-admission-test/) on when an AI will get a Mensa-worthy IQ score (current prediction: April 2028) suggests to me that we're not far from AGI. What do you think? My sense is that you're much less bullish on AI progress than e.g. the LessWrong or EA communities.
Huh, it's hard for me to imagine reaching a 98th-percentile IQ score without the ability to do lots of cognitive work (I'm not talking about some model fine-tuned on IQ tests or whatever, just a general language model that happens to score well on the test). I have different intuitions about the calculator example: the point I take away from it is...we use calculators all the time! I'm perfectly content calling calculators a transformative innovation, though these language models are already much more general than the calculator.
Re: "There is no real cognition going on inside ChatGPT. It is spitting out answers based on a statistical function trained on encoded inputs and outputs." This seems like a No True Scotsman that will keep you from noticing how these models' capabilities are improving. SSC's take on GPT-2 was good for this, and imo it got thoroughly vindicated when the GPT family went from an interesting toy to being able to create real economic value.
Re: scaling — have you read Gwern's stuff on machine learning scaling? All of the "we don't really understand it" discourse takes on a very different tone once you read his "deep learning wants to work" take. A technique that AI researchers disdained because it didn't match their love of theory, that worked anyway, and that the whole SV community then realized was really promising...strikes me as something real and useful we accidentally discovered in the world. That we don't understand it doesn't stop it from working, and the fact that every basic little trick we try yields more fruit suggests the fruit is really extremely low-hanging. For me it's worrying, because I think we need good theory to learn how to control these systems, but the basic case for this being a thing doesn't seem in question.
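To make the "more compute keeps yielding fruit" point concrete: the scaling-law literature Gwern summarizes finds test loss falling as a smooth power law in compute. Here's a toy sketch of that shape — the constants are invented for illustration, not fitted to any real model:

```python
# Illustrative only: the empirical "scaling law" shape from the literature
# Gwern discusses -- loss falls as a smooth power law in compute, down to
# some irreducible floor. Constants a, b, floor below are made up.

def power_law_loss(compute: float, a: float = 10.0, b: float = 0.3,
                   floor: float = 1.7) -> float:
    """Toy loss curve: L(C) = floor + a * C**(-b)."""
    return floor + a * compute ** (-b)

# Each 10x jump in compute buys a predictable multiplicative shrink in the
# reducible part of the loss -- no cleverness required, just scale.
for c in [1e3, 1e6, 1e9, 1e12]:
    print(f"compute={c:.0e}  loss={power_law_loss(c):.3f}")
```

The striking (and, to your point, slightly unnerving) part is that curves of this shape held across many orders of magnitude empirically, without a theory explaining why.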