Friendly reminder:

Scaling LLMs will not lead to AGI.

Kind of like saying that scaling the language area in a human brain won't lead to a human brain.

True, but just don't do that then.

Who attuned your crystal ball?

LLMs are already pretty general. They've got the multimodal ones, and aren't they using some sort of language-action-model to drive cars now? Who is to say AGI doesn't already exist?

It doesn't already exist, pretty obviously.

https://www.youtube.com/watch?v=YeRS4TbtZWA

It's a trick statement, because AGI is undefined.

I think LLMs are at least name-worthy given that they're artificial and somewhat smart across a generality of domains. Albeit the "smartness" comes from training on a massive corpus of text in those domains. So maybe it's really a specific intelligence, but for so many specific tasks that it seems general.

At some point you have to throw in the towel, when these things are walking and talking around us. Some people move the goalposts of "AGI" to mean that the machine totally emulates a person, including curiosity and creativity, which these models currently lack.

But why should it? In Genesis, it's said that god created man in his own image. I have to assume this refers to god's mental attributes (curiosity, creativity, etc.) rather than physical attributes.