> The AI effect occurs when onlookers discount the behavior of an artificial intelligence program as not "real" intelligence.[1]

> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[2] Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"[3]

https://en.wikipedia.org/wiki/AI_effect

I don't know how many times I've posted this now, or how many times I'll have to keep posting it in the future, because it's a very real psychological phenomenon that I can observe in real time among people, such as the author of this article.

It would have been better if the term used had been "Prediction Machine" rather than "Artificial Intelligence".

One may argue: what's in a name? AI is a catchier term and serves as an umbrella under which terms like statistics & prediction, ML, NLP, AGI, etc. can be grouped.

* As an engineer it makes sense since the ultimate aim should be to reach AGI.

* As a salesperson it makes selling the technology easier.

* As for the rest of the world, the term has a varying meaning, one that is limited only by the individual's imagination.

  - My wife, who has a PhD in Biotechnology, believes AI to be more like a sentient being.

  - My sales director feels AI should be able to act like a real salesperson, taking vague instructions, doing research, and churning out presentations that hit the customer's heart.

The author is right here - the term is misused and misunderstood by most folks, and we should expose the system for what it actually is rather than what it can be.

This is a meme that needs to die. It's not insightful or interesting and just muddies the waters in a way that confuses the issue for everybody.

Look, nobody (for the most part) is claiming that current AI systems are "there yet" as far as being fully equal to human intelligence. That's what makes this whole argument useless... it's basically a straw-man argument.

OTOH, saying that artificial intelligence systems aren't "intelligent" to a point because they don't do exactly what humans do strikes me as roughly equivalent to saying that "airplanes don't really fly because they don't flap their wings like birds. They're just mimicking actual flight."

Of course AI is intelligent, it just isn't done developing yet. An apt comparison might be a precocious (and somewhat peculiar, in this case) child.

And as an additional side-note: this article seems to conflate "AI" and "Generative AI" / LLMs as if they were the same thing. But that's not right - Generative AI / LLMs are just a subset of the AI techniques and technologies that exist. Yes, GenAI/LLM is the current "new hotness" and the trendy thing everybody is talking about, but that doesn't excuse completely ignoring the distinction between "all of AI" and "Generative AI".

How are LLMs intelligent? Point me to the intelligence?

Literally it’s a black box of statistics. Tokens in, Next token probabilities out
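For what it's worth, here is a minimal sketch of what "tokens in, next-token probabilities out" looks like mechanically, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (an illustration only, not a claim about any particular production system):

```python
# Minimal sketch: tokens in, next-token probabilities out.
# Assumes the Hugging Face transformers library and GPT-2 as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of Japan is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```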

> Literally it’s a black box of statistics. Tokens in, Next token probabilities out

You literally just repeated the exact same mistake, albeit in different words. This is tantamount to speaking of airplanes: "It's not flight, it's just a bunch of fluid mechanics". You're focusing on the internal implementation details, not the expressed behavior.

I don't know if people do this out of some sort of "biological chauvinism" or what, but it strikes me as rather odd.

I can (to pick one random example out of many) ask ChatGPT "Who is the current Prime Minister of Japan and what is the largest prime number smaller than their age?" and get a correct answer. To say that a system that can pull that feat off isn't intelligent boggles the imagination IMO. Token prediction? Who cares. Hallucinations? Humans hallucinate also. Perhaps less frequently, but all that says is that we aren't "all the way there" yet with AI.
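(The prime-number half of that question is, admittedly, the mechanical part; here's a quick sketch with a hypothetical helper, making no assumption about the Prime Minister's actual age:)

```python
# Hypothetical helper: largest prime strictly smaller than n.
# No assumption is made about anyone's actual age; 68 is just an example input.
def largest_prime_below(n: int) -> int | None:
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        return all(k % d != 0 for d in range(2, int(k ** 0.5) + 1))

    for candidate in range(n - 1, 1, -1):
        if is_prime(candidate):
            return candidate
    return None

print(largest_prime_below(68))  # 67, the largest prime below 68
```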

LLM is intelligent like a human, book, cat, thermostat, or grasshopper is intelligent.

But it's not a human intelligence, and it's creepy when its creators dress it up to pretend it is. It's an extremely superficial imitation of a human, and it doesn't behave like a human. It's like when a human furry claims to really be a dog or an otherkin.

Humans aren't the only intelligence though. Just because we "dress it up" to, well, interact with us humans doesn't mean it's not intelligent. I mean, what else would you expect them to do, make it emit binary?

"LLM is intelligent like a human, but it's not, and don't pretend it is"

is probably the best summary I've read on the topic

The conspiracist in me believes that they want people to think it’s human-like and actually sentient so they can argue that what they’re doing is fair use. It’s just learning like humans learn!

As you said, it boggles the imagination, because it feels like chatting with a human. But "having knowledge" does not equal "being intelligent". Would you say a dictionary is intelligent? GPT is just a human-friendly way to access, condense, and present to you information already present on the web. It saves you hours of web searching and good old note-taking. That's cool, but that's not intelligence.

How are humans intelligent? Point me to the intelligence?

Literally they're just meat. Sensory inputs in, electrical signals out.

How is the I Ching intelligent, etc.

Point me to human intelligence. Not a mimicry of intelligence, actual intelligence. No, solipsism doesn't count.

Humans learn dynamically and can apply patterns to problems across domains. I think these are a couple of key capabilities we have that LLMs don't (for example).

> Humans learn dynamically

In-context learning (few-shot prompting) is absolutely demonstrable by LLMs since GPT-3.5 at least.
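A minimal sketch of what such a few-shot (in-context) prompt looks like; the task and example strings are made up for illustration, and no particular model or API is assumed:

```python
# Sketch of a few-shot / in-context learning prompt: the model sees a handful of
# input -> output pairs in its context window and is expected to continue the
# pattern for a new input, with no weight updates involved.
few_shot_prompt = """\
Convert each date to ISO 8601 format.

Input: March 4, 2021
Output: 2021-03-04

Input: 7 July 1999
Output: 1999-07-07

Input: December 31, 2024
Output:"""

# This string would be sent to whatever completion API is available; the model's
# continuation ("2024-12-31") is behaviour picked up entirely from the prompt.
print(few_shot_prompt)
```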

Yes, ICL demonstrates some capability, but is still extremely limited compared to a human's ability. It's an interesting baby step at this point.

Why do you think only LLMs are the pinnacle of AI today?

> Why do you think only LLMs are the pinnacle of AI today?

I don't think that but you're right that I was partially countering a pro-LLM argument that was not there.

It was a bit of a knee-jerk reaction due to the number of people overstating current LLMs' capabilities here on HN.

Intelligence is the mitigation of uncertainty.

If you have a challenge (a question) and the subject reduces the unknown qualities of that challenge in a deterministic way, then some degree of intelligence has been achieved.
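One way to make "mitigation of uncertainty" concrete, if uncertainty is read as Shannon entropy over candidate answers (just one possible reading of the comment above):

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before: four equally likely answers to the challenge -> 2 bits of uncertainty.
prior = [0.25, 0.25, 0.25, 0.25]
# After the subject responds: one candidate is now strongly favoured.
posterior = [0.85, 0.05, 0.05, 0.05]

reduction = entropy_bits(prior) - entropy_bits(posterior)
print(f"uncertainty mitigated: {reduction:.2f} bits")  # roughly 1.15 bits
```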

> Literally it’s a black box of statistics. Tokens in, Next token probabilities out

You just described an extremely accurate model of people.

[deleted]

You must not have been paying attention. There are people who think current LLMs are human-level AI, and those who don't but think we will get there very soon (see the trillions of dollars poured into the AI craze, and see your own comment :)).

It's not unreasonable to say that the technology is being hyped up to a point that does not correspond to its current state, nor to its future possibilities.

> There are people who think current LLMs are human-level AI

There are always a handful of people who hold pretty much any belief. But it seems to me that in your response here you are also conflating "intelligence" and "human-level intelligence". And I will again insist that just because something hasn't yet reached that particular pinnacle ("human-level intelligence") does not mean that it isn't "intelligent" to a point. And the original headline I was responding to did not say "AI isn't human-level intelligent"; it said "AI isn't intelligent". Those are two markedly different statements.

> and those who don't but think we will get there very soon

And in fact, we may or we may not. But that's not an inherently unreasonable position to hold, given what we know today.

> It's not unreasonable to say that the technology is being hyped up to a point that does not correspond to its current state, nor to its future possibilities.

Sure, but that's a completely different statement than what we started with.

agree, hence the need to not anthropomorphise and remember there will be no AGI, just useful tools: https://medium.com/@fsndzomga/there-will-be-no-agi-d9be9af44...

I don't know for sure if there will be, or will not be, AGI eventually. But broadly speaking I agree with the sentiment that there will be "useful tools". And the AIs we have today are well into the "useful tool" category and display behavior that we would classify as "intelligent" if we knew nothing about the entity generating that behavior. I don't see how anybody can claim that that isn't intelligence (while acknowledging that these systems still fall short of fully matching human intelligence in many ways).

It's like some people are committing a sort of "fallacy of the excluded middle" and making this overly binary: "it's either fully intelligent and completely equivalent to a human, or it's not intelligent at all". But that ignores all the middle ground between those extremes.

To take your analogy of the plane, we say the plane flies. We don't say the plane flies like a bird because that wouldn't be accurate. Similarly, we should say, for example, that LLMs summarize, generate text, retrieve potential answers to questions, and generate code—without adding the "like human" part, which only adds confusion. A calculator performs calculations, but it doesn't process calculations like humans do. We should focus on the utility and remember that these are just tools, not sentient beings.

How do you know AGI will never exist? Given that we humans ourselves exist, it is not theoretically impossible.

Intelligence is already difficult enough to evaluate in humans; I don't see what lowering the bar for computers will accomplish.

The core problem is that, much like "consciousness", we don't really know what "intelligence" means. Different definitions are used for different purposes, but people tend to think everyone means the same thing when they use the term.

> ...straw-man argument.

Nobody is arguing that it's not intelligent because it isn't equal to human intelligence. The claim in the article is that there is simply no intelligence to speak of here.

> Of course AI is intelligent...

That's one way to make an argument. The author disagrees, as do I. I have seen no evidence whatsoever that these things can do anything truly novel or solve a problem. All I see is regurgitation.

"Mimicry of intelligence isn't intelligence" is a big assumption. It's like saying "fake it until you make it" doesn't work. It's like saying that two undistinguishable properties are nevertheless not equivalent.

Just for kicks: Claude 3.5’s opinion of this piece:

> This critique raises many valid concerns about the limitations, risks, and potential negative impacts of AI. It serves as an important counterpoint to overly optimistic or uncritical views of AI's potential. However, the critique may underestimate the potential for AI to evolve and overcome some current limitations. Additionally, while highlighting risks, it doesn't fully acknowledge potential benefits of AI in areas like scientific research, medical diagnostics, or improving efficiency in various fields.

Aircraft engineers studied birds to make airplanes.

If AI engineers don't study brains, they will probably never build an intelligent AI.

At least simulate the brain of an ant or a small lizard; that shouldn't be hard to do.

Maybe try to do more things with primates or animals to teach them things.

I don't understand why the cognitive sciences are not drawn on when dealing with AI; that seems obvious, but all I see is people viewing the brain as if it were a computer running an algorithm.

Launching a bird-shaped wooden plank will never lead to flight.

Feels like we forgot that science is about understanding things. AI engineers don't analyse trained neural networks; they're black boxes. What's the point?

Maybe scientists are just bad at science.

There are so many questions to ask.

What current-generation LLMs are doing is like being trained on a dataset of human dances, while users somehow expect them to do more than replicate the dances they have already seen. The model is supposed to reconstruct the internal brain state of a human just from seeing the dances, yet if it ever comes up with a dance that isn't in the dataset, it gets punished. Finally, people expect it to be intelligent: because humans are just dance-move predictors and intelligence is equivalent to dance-move prediction, it should now do the very thing it was explicitly punished for, i.e. come up with new dances.

[deleted]