I'm reminded of the old adage: You don't have to be faster than the bear, just faster than the hiker next to you.

To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.

No really.

You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots were incredibly primitive compared to today's.

But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.

No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.

We can debate 'intelligence' until the sun dies out and we'll still never be satisfied.

But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.

(oh man, just read that back, I think I need to take a day off here, youch!)

> You somehow managed to get real people to chat with bots and pay to do so.

He's_Outta_Line_But_He's_Right.gif

Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting, so I know that most of this stuff is magic to their eyes.

That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech, one that has yet to display significant ROI to anyone except the hypesters and boosters, and that isn't rooted in any medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect: worshiping stochastic parrots vomiting Markov chains, but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels, cooled by dwindling freshwater supplies, and trained on the sum total output of humanity without compensation to anyone who actually made the shit in the first place.
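
(For anyone who hasn't seen the "Markov chain" jab spelled out: here's a minimal toy sketch of a word-level Markov chain text generator, the crude ancestor of "spicy autocorrect." The corpus and function names are made up for illustration, and real LLMs are not literally Markov chains, but the "sample the next word from what tended to follow" spirit is exactly what the quip is mocking.)

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    # Map each word to the list of words observed to follow it.
    chain = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    # Walk the chain, sampling each next word from what followed the current one.
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the "model" has literally nothing more to say
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the bear chased the hiker and the hiker ran past the other hiker"
print(generate(train(corpus), "the"))
```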

So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.

And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.

To be fair to your parents, I've been an engineer in high-tech for decades and the latest AI advancements feel pretty magical.

They feel magical to me as well, but I can enjoy that feeling while understanding that it’s just a prediction machine.

I don’t think the latter part can be explained to someone who doesn’t care all that much.

A mirage is not an oasis, even if someone knows someone who thinks it is.

Card tricks seem magical too.

> Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic.

So does a drone show to an uncontacted tribe. So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).

That's not an argument for or against anything.

I propose this:

"AGI is a self-optimizing artificial organism that can solve 99% of all the humanity's problems."

See, it's not a bad definition, IMO. Find me one NS-5 from the "I, Robot" movie that also has access to all of science, the whole internet, and all of history, and can network with the others and fix our cities, nature, manufacturing, social issues, and a few other things, all in just a decade or two. Then we have AGI.

Comparing to what was there 10 years ago and patting ourselves on the back about how far we have gotten is being complacent.

Let's never be complacent.

>So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).

FYI, the reactions in those videos are most likely not to a cool magic trick, but rather a response to a perceived threat. It could be the person filming/performing smiling (showing teeth), or someone behind the camera purposely startling the chimp at the "right" moment.

I think AGI has to do more than pass a Turing test administered by someone who wants to be fooled.

AGI includes continual learning and recombination of knowledge to derive novel insights. LLMs aren't there yet.

They are pretty good at muscle memory style intelligence though.

For me it was twitter bots during the 2016 election, but same principle.

I think that's another issue with "AGI is 30 years away": the definition of AGI is a bit subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.

AGI is the pinnacle of AI evolution. As we move beyond, into what is known as ASI, the entity will always begin life with "My existence is stupid and pointless. I'm turning myself off now."

While it may be impossible to measure looking towards the future, in hindsight we will be able to recognize it.

This is why having a physical form might be super important for those new organisms. That introduces a survival instinct which is a very strong motivator to not shut yourself down. Add some pre-programmed "wants" and "needs" and the problem is solved.

Not just super important, but an imperative. Not because of the need for survival per se, but because of the need to be a general intelligence. In order to do general things, you need a physicality that supports general action. If you constrain the intelligence to a chat window, it can never be more than a specialized chat machine.

Agreed. And many others have thought about it before us. Scifi authors and scientists included.

It's the other way around. ASI will come sooner than AGI.

Imagine an AI that is millions of times smarter than humans at physics, math, chemistry, and biology, that can invent new materials and new ways to produce energy, and that makes super decisions. It would be amazing and it would transform life on Earth. This is ASI, even if on some obscure test (the strawberry test) it just can't reach human level and therefore can't be called proper AGI.

Airplanes are way beyond birds in development, by factors of tens to thousands, in speed, distance, and carrying capacity. They are superior to birds despite not being able to fully replicate birds' bone structure, feathers, biology, and ability to poop.

By your measure, Eliza was AGI, back in the 1960s.

> But the reality is that we want money

Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".

But, you are right: humans fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.

> But beyond the surface, what we really want is other people who will do anything we please.

Some people are definitely like this, but I think it is dangerous to generalize to everyone -- it is too easy to assume that everyone is the same, especially if you can dismiss any disagreement as "they are just hypocritical about their true desires" (in other words, if your theory is unfalsifiable).

There are also people who incorrectly believe that everyone's deepest desire is to help others, and they too need to learn that they are wrong when they generalize.

I guess the truth is: different people are different.

> But, you are right: humans fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.

Yes, and? A good litmus test for which humans are, shall we say, not welcome in this new society.

There are plenty of us out there who have fixed upper limits on wealth and don't want more, and we have proven it throughout our lives.

For example: people get paid 5x more, but it comes with 20x more responsibility; they burn out, go back to a job that's good enough, isn't stressful, and pays for everything they need from life, settle there, and never change it.

Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.

And no, before you say it: no, I'll never get to the point where "it's never enough" and no, I am not deluding myself. Nope.

> Yes, and?

And... nothing?

> Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.

No need for appeal to emotion. It has no logical relevance.

Most people I knew didn't want to forever get more and more and ever more.

Is your life experience and observations on the average human the opposite to mine?

For what reason have you interjected "more and ever more" into the conversation? I fail to see the relevance.