All of occamsbulldog's Comments + Replies

Eli Dourado AMA

Huh, it's hard for me to imagine reaching a 98th-percentile IQ score without the ability to do lots of cognitive work (I'm not talking about some model fine-tuned on IQ tests or whatever, just a general language model that happens to score well on the test). I have different intuitions about the calculator example: the point I take away from it is that we use calculators all the time! I'm perfectly content calling calculators a transformative innovation, and these language models are already much more general than the calculator.

Re: "There is no real... (read more)

Eli Dourado AMA

Hi Eli, I read your piece on regulatory barriers slowing AI's material impact on society. For me this pushes things in the direction of "we'll have more AI automation of AI R&D before big societal trends in job automation", which could imply faster AI progress generally if labs focus more on their own AI -> research automation -> better AI feedback loop. I do think that an AI that could perform basically any job (not requiring hands) as well as a human for pennies on the dollar would radically transform society, but maybe we ... (read more)

3 · elidourado · 2y

I think the kinds of tests that prove that a human is intelligent or sentient or whatever are not the same as the kinds of tests that prove a computer program is sentient. For example, imagine a test where we timed the test-taker on how long it takes to multiply two 8-digit numbers together. For most humans, this would take several minutes. For even a dollar-store calculator, it would take under a second.

For many decades, Alan Turing's proposal that a computer that could converse indistinguishably from humans would be a sign of human-level sentience and intelligence was widely accepted. I myself thought, "Sure, sounds good," when I first heard of it. But actually, it turns out that carrying out a conversation is easier for machines than we thought. There is no real cognition going on inside ChatGPT. It is spitting out answers based on a statistical function trained on encoded inputs and outputs.

I think it is quite possible that an AI will achieve a 98th-percentile score on a Mensa test by 2028 (maybe earlier). What I don't think is that that will be a sign of human-level sentience or intelligence. It's a sign of being able to mimic a few salient aspects of human intelligence.

To get to parity with the human brain, we need several orders of magnitude higher computational efficiency to match neurons. We don't need to get there all the way on efficiency; we can do some by burning more energy. Even so, it will take a couple of decades in my estimation. And even so, there is still the possibility that we don't really understand how the neurons work and we could be way off base! Michael Levin has pointed out that a caterpillar essentially dissociates its brain to become a butterfly, and yet somehow it retains at least some memories. I think we are far from really grokking it.
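The 8-digit multiplication point above is easy to check directly. A minimal sketch (the specific numbers are arbitrary illustrative values, not from the comment):

```python
import time

# Two arbitrary 8-digit numbers, chosen only for illustration
a, b = 73_914_286, 58_201_347

start = time.perf_counter()
product = a * b  # the operation that takes a human several minutes on paper
elapsed = time.perf_counter() - start

print(product)
print(f"{elapsed:.9f} s")  # comfortably under a second, as the comment notes
```

The asymmetry is the point: a task that is a meaningful test of human arithmetic skill is a trivial benchmark for any computing device, which is why such tests transfer poorly as measures of machine intelligence.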