Greg Brockman (President of OpenAI) also said that OpenAI is around 80% of the way to achieving "AGI", but it was also disclosed that his stake in OpenAI is worth around $30 billion.

So what does the true definition of "AGI" actually mean? It depends on who you ask.

At this point it appears to many to mean "A Great IPO" or "A Gigantic IPO" rather than "Artificial General Intelligence", a term that has clearly been hijacked to mean something else.

> So what does the true definition of "AGI" actually mean? It depends on who you ask.

AGI - Automatically Generating Income

AGI - Ad-Generated Income

> So what does the true definition of "AGI" actually mean?

No worries, there will be a startup creating "AGI Bench": score >=80% and you're AGI. They'll be valued at $50B.

The ARC-AGI benchmark is basically this already.

That is not at all the intention of the ARC team. By the ARC team's definition, passing any single ARC-AGI benchmark does not mean that AGI has been achieved. Instead, AGI would be considered achieved when we are no longer able to come up with new benchmarks that AI systems do not immediately do well on.

> So what does the true definition of "AGI" actually mean? It depends on who you ask.

When Greg Brockman makes a lot of money from the deal.

That's the trick, right? What do they really mean by AGI? Depending on how narrow you go, it sounds like we've already achieved it. However, as long as they keep saying they'll achieve it without defining it first, they can keep saying it endlessly to create hype.

One key thing I've heard about AGI, and I think it would be the most determining factor for me, is a model that learns on the fly. That could be done one way or another, but when you consider that LLMs basically run like "ROM" files (the weights are read-only at inference time), it makes it a little complicated.
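To illustrate the "ROM" point, here is a minimal sketch (plain numpy, a hypothetical toy linear model, not any real LLM API) contrasting frozen-weight inference, which never mutates the weights, with an on-the-fly update step, which does:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # toy stand-in for "model weights"

def infer(W, x):
    # Inference only reads the weights, like running from ROM:
    # nothing the model sees here is retained afterwards.
    return W @ x

def online_update(W, x, target, lr=0.1):
    # "Learning on the fly": one gradient step on one example
    # for the squared-error loss, returning mutated weights so
    # the experience actually persists.
    y = W @ x
    grad = np.outer(y - target, x)
    return W - lr * grad

x = rng.normal(size=4)
target = np.ones(4)

frozen = W.copy()
_ = infer(W, x)
assert np.allclose(W, frozen)       # inference changed nothing

W2 = online_update(W, x, target)
assert not np.allclose(W2, frozen)  # the update was "remembered"
```

Current deployed LLMs do only the first kind of step; anything that looks like memory (context windows, retrieval) lives outside the weights.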

I think we need to re-imagine how LLMs are built, trained, and run. But also, figure out how to drastically lower the cost of running them.

I think they would not be LLMs then.

> "Artificial General Intelligence" which has been clearly hijacked to mean something else.

I mean, the goalposts shifted. The game of Go used to be considered to require true AI. So did passing the Turing test. Scanning, analyzing, and improving complex codebases largely on their own would have been considered some sort of AGI by me 6 years ago.

Now sure, we all know they lack true understanding. But it gets blurry at times what that actually means.

But I don't buy that there will be a magic point where self-improving AGI explodes towards singularity. The current approach is very, very energy- and compute-intensive, and that is unlikely to change.

Maybe the dystopian AI development will result in energy funding and advancements that actually benefit most of us. I really hope all this turns out to be a net positive for humanity. If we won't get true "AGI", which we are far, far away from, we could at least make some advancements in different areas.

Well, I surely hope so, but I feel less positive if that means a nuclear power plant parked next to every new, rushed datacenter.

https://www.scmp.com/news/china/science/article/3351721/chin...

But in general I do believe AI has the potential to be a great positive for humanity on its own - if the open models stay strong and aren't controlled by only a few people.

I can see your reasoning. Unfortunately I see and experience everything wrong with AI in my daily life. People ask it what gifts to buy for their loved ones or use it as a therapist substitute. Humans are not ready for this technology. A lot of us are even losing the ability to read properly (even though that's related to technology in general). It's extremely scary. The only advantage humans have is an extraordinarily big brain and a pair of thumbs; we can't afford to use our brains less.

I mean, people have been doing dumb shit since the beginning of time, and I considered this society messed up since way before LLMs.

And yes, humans as a whole are not even ready for cars or nuclear weapons. We built and used them anyway.

But my brain is still pretty busy, and I don't think the younger generation is getting dumber because of LLMs, but rather from mindlessly consuming TikTok and co.

LLMs are also a great learning tool, and anyone using them should learn their limits quickly. Not all do, though. That is obvious.

First do 80%, then do the remaining 150%, I would imagine

> So what does the true definition of "AGI" actually mean?

"Your stake is > $30 billion" seems like a more reasonable and realistic criterion to me.

AGI is defined as whatever it takes for stockholders to make $$$, I guess?

One of the random tidbits I can remember from the New Yorker Altman deep dive was Brockman being obsessed with making $1B. It was memorable because I actually cringed reading it.