I'm sorry what even is this? Giving $10k rewards for significant advancements toward "AGI"?

What does "making a framework" even mean? It feels like a nothing post.

When I think of what real AGI would be I think:

- Passes the turing test

- Writes a New York Times Bestseller without revealing it was written by AI

- Writes journal articles that pass peer review

- Wins a Nobel Prize

- Writes a successful comedy routine

- Creates a new invention

And no, nobody is going to make an automated Kaggle benchmark to verify these. Which is fine, because an LLM will never be AGI. An LLM can't even learn mid-conversation.
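The "can't learn mid-conversation" point can be made concrete with a toy sketch: during inference, a model's weights are frozen and only the context window grows, so nothing from the conversation is written back into the model. (This is a hypothetical toy class for illustration, not a real LLM API.)

```python
# Toy illustration: inference reads weights + context but never
# writes to the weights, so the "model" itself learns nothing.

class ToyLLM:
    def __init__(self):
        # Fixed at "training time"; never mutated during generation.
        self.weights = {"fact": "old"}

    def generate(self, context):
        # Output depends on both weights and context, but this method
        # only reads self.weights -- it never updates them.
        return f"weights say {self.weights['fact']}, context says {context[-1]}"

model = ToyLLM()
weights_before = dict(model.weights)

# "Mid-conversation", only the context accumulates:
context = ["hello"]
context.append("new")
reply = model.generate(context)

# The conversation changed the output, but not the model.
assert model.weights == weights_before
```

Actual in-weight learning would require an explicit training step (gradient updates or fine-tuning), which is exactly what does not happen between turns of a chat.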

Why does your definition of "AGI" have to exclude nearly all humans? Wouldn't it still be "AGI" if it was as smart as the average human? Since when did AGI stop representing the words that make up the term? Artificial (man-made) General (not specific) Intelligence. Is a human not "GI"!?

Well, it's my feeling that I could do most of those things if I were given infinite time, and for all intents and purposes an AI isn't limited to human time, since it can run with 1,000x parallelism 24/7.

Take writing a best-seller: these AIs have a huge advantage in that they've read every notable work ever written. If they can't craft something impressive and creative after all that, it really indicates that they're well below human level on the creativity/writing side and just masking it with massive amounts of data.

Or put another way -- it's not really AGI until there is a model that can learn at human speeds; no amount of being pre-trained on specific problem sets (e.g. human emotions, coding, math theorems, etc.) will close that gap.

I get the feeling that the original post was also written using LLMs, it doesn’t make a lot of sense.

If an LLM like this is really intelligent, at the very least, I’d expect it to be able to invent.

For example, train an LLM on a dataset only containing knowledge from before nuclear energy was invented, and see if it can invent nuclear energy.

But that’s the problem: they’re not really training the model on intelligence, they’re training it on knowledge. So if you strip away the knowledge, you’re left with almost nothing.

>> An LLM can't even learn mid-conversation.

There’s an implicit assumption that scaling text models alone gets us to human-like intelligence, but that seems unlikely without grounding in multiple sensory domains and a unified world model.

What’s interesting is that if we do go down that route successfully, we may get systems with something like internal experience or agency. At that point, the ethical frame changes quite a bit.

They’re slowly redefining AGI so they can use it for more marketing. If you showed someone from 1960 our LLMs and told them “this is AI,” I think they’d be astounded but a little confused, because “artificial intelligence” carried a very clear meaning in literature and media. Now it is marketing terminology, and we’re no closer to having a meaningful definition for the word intelligence.

AI has been consistently defined as "anything we can't make a computer do yet" since 1970.

https://quoteinvestigator.com/2024/06/20/not-ai/

> They’re slowly redefining AGI so they can use it for more marketing.

If they don't do that, then the trillions of dollars that support their current share price will most probably evaporate, so there are very big incentives for them to outright try to re-create reality (i.e., what we usually meant when we thought about artificial intelligence).

One thing I find very interesting about the Turing test is that as chatbots improve, humans also get better at recognizing them.

Grok recently created a cancer vaccine for a dog that reduced tumor size by 75%.

Severely misleading statement.