This is a meme that needs to die. It's not insightful or interesting and just muddies the waters in a way that confuses the issue for everybody.
Look, nobody (for the most part) is claiming that current AI systems are "there yet" as far as being fully equal to human intelligence. That's what makes this whole argument useless... it's basically a straw-man argument.
OTOH, saying that artificial intelligence systems aren't "intelligent" to some degree because they don't do exactly what humans do strikes me as roughly equivalent to saying that "airplanes don't really fly because they don't flap their wings like birds. They're just mimicking actual flight."
Of course AI is intelligent, it just isn't done developing yet. An apt comparison might be a precocious (and somewhat peculiar, in this case) child.
And as an additional side note: this article seems to conflate "AI" and "Generative AI" / LLMs as being the same thing. But that's not right - Generative AI / LLMs are just a subset of the AI techniques and technologies that exist. Yes, GenAI/LLMs are the current "new hotness" and the trendy thing everybody is talking about, but that doesn't excuse completely ignoring the distinction between "all of AI" and "Generative AI".
How are LLMs intelligent? Point me to the intelligence?
Literally it’s a black box of statistics. Tokens in, Next token probabilities out
> Literally it’s a black box of statistics. Tokens in, Next token probabilities out
You literally just repeated the exact same mistake, albeit in different words. This is tantamount to speaking of airplanes: "It's not flight, it's just a bunch of fluid mechanics". You're focusing on the internal implementation details, not the expressed behavior.
I don't know if people do this out of some sort of "biological chauvinism" or what, but it strikes me as rather odd.
I can (to pick one random example out of many) ask ChatGPT "Who is the current Prime Minister of Japan and what is the largest prime number smaller than their age?" and get a correct answer. To say that a system that can pull that feat off isn't intelligent boggles the imagination IMO. Token prediction? Who cares. Hallucinations? Humans hallucinate also. Perhaps less frequently, but all that says is that we aren't "all the way there" yet with AI.
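For what it's worth, the arithmetic half of that question is easy to check mechanically. A quick sketch (the age 68 below is a placeholder for the example, not a claim about the actual Prime Minister):

```python
def largest_prime_below(n):
    """Return the largest prime strictly smaller than n, or None if none exists."""
    def is_prime(k):
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    # Walk downward from n - 1 until we hit a prime.
    for candidate in range(n - 1, 1, -1):
        if is_prime(candidate):
            return candidate
    return None

print(largest_prime_below(68))  # → 67
```

The interesting part, of course, is not the trial division but that the model has to decompose the question into a lookup step and a computation step before any arithmetic happens.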
An LLM is intelligent the way a human, book, cat, thermostat, or grasshopper is intelligent.
But it's not a human intelligence, and it's creepy when its creators dress it up to pretend it is. It's an extremely superficial imitation of a human, and it doesn't behave like a human. It's like when a human furry claims to really be a dog or an otherkin.
Humans aren't the only intelligence though. Just because we "dress it up" to, well, interact with us humans doesn't mean it's not intelligent. I mean, what else would you expect them to do, make it emit binary?
"LLM is intelligent like a human, but it's not, and don't pretend it is"
is probably the best summary I've read on the topic
The conspiracist in me believes that they want people to think it’s human like and actually sentient so they can argue that what they’re doing is fair use. It’s just learning like humans learn!
As you said, it boggles imagination, because it feels like chatting with a human. But "having knowledge" does not equal "being intelligent". Would you say a dictionary is intelligent? GPT is just a human-friendly way to access, condense, and present information that's already on the web. It saves you hours of web searching and good old note-taking. That's cool, but that's not intelligence.
How are humans intelligent? Point me to the intelligence?
Literally they're just meat. Sensory inputs in, electrical signals out.
How is the I Ching intelligent, etc.
Point me to human intelligence. Not a mimicry of intelligence, actual intelligence. No, solipsism doesn't count.
Humans learn dynamically and can apply patterns to problems across domains. I think these are a couple key capabilities we have that LLM's don't have (for example).
> Humans learn dynamically
In-context learning (few-shot prompting) has been clearly demonstrable in LLMs since at least GPT-3.5.
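For readers unfamiliar with the term: few-shot prompting just means packing a handful of worked examples into the prompt so the model picks up the pattern in-context, with no weight updates at all. A minimal sketch of the prompt format (the sentiment-labeling task and the example reviews here are made up for illustration):

```python
# Few-shot prompt: demonstrate a pattern with a few worked examples,
# then leave the final input for the model to complete.
examples = [
    ("The food was wonderful.", "positive"),
    ("I waited an hour and left hungry.", "negative"),
    ("Decent coffee, nothing special.", "neutral"),
]
query = "Best service I've had in years."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt += f"\nReview: {query}\nSentiment:"

print(prompt)
```

Sending that prompt to a model and having it continue with the right label, for a task it was never fine-tuned on, is what "in-context learning" refers to.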
Yes, ICL demonstrates some capability, but is still extremely limited compared to a human's ability. It's an interesting baby step at this point.
Why do you think only LLMs are the pinnacle of AI today?
> Why do you think only LLMs are the pinnacle of AI today?
I don't think that but you're right that I was partially countering a pro-LLM argument that was not there.
It was a bit of a knee-jerk reaction due to the number of people overstating current LLMs' capabilities here on HN.
Intelligence is the mitigation of uncertainty.
If you pose a challenge (a question) and the subject reduces the unknown qualities of that challenge in a deterministic way, then some degree of intelligence has been achieved.
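One standard way to make "mitigation of uncertainty" concrete is Shannon entropy: an answer is informative to the extent that it shrinks the space of remaining possibilities. A toy illustration (the four-suspect setup is invented for the example):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Challenge: which of four equally likely suspects did it?
before = entropy([0.25, 0.25, 0.25, 0.25])  # 2.0 bits of uncertainty

# An answer that rules out all but one suspect:
after = entropy([1.0, 0.0, 0.0, 0.0])       # 0.0 bits remain

information_gain = before - after           # uncertainty mitigated: 2.0 bits
```

On this framing, the question for LLMs is simply how much uncertainty their answers actually remove, and how reliably.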
> Literally it’s a black box of statistics. Tokens in, Next token probabilities out
You just described an extremely accurate model of people.
You must not have been paying attention. There are people who think current LLMs are human-level AI, and those who think we will get there very soon (see the trillions of dollars poured into the AI craze, and see your own comment :)).
It's not unreasonable to say that the technology is being hyped up to a point that does not correspond to its current state, nor to its future possibilities.
> There are people who think current LLMs are human-level AI
There are always a handful of people who hold pretty much any belief. But it seems to me that in your response here you are also conflating "intelligence" and "human-level intelligence". And I will again insist that just because something hasn't yet reached that particular pinnacle ("human-level intelligence") does not mean that it isn't "intelligent" to a degree. And the original headline I was responding to did not say "AI isn't human-level intelligent"; it said "AI isn't intelligent". Those are two markedly different statements.
> and those who think we will get there very soon
And in fact, we may or we may not. But that's not an inherently unreasonable position to hold, given what we know today.
> It's not unreasonable to say that the technology is being hyped up to a point that does not correspond to its current state, nor to its future possibilities.
Sure, but that's a completely different statement than what we started with.
Agree, hence the need not to anthropomorphise, and to remember there will be no AGI, just useful tools: https://medium.com/@fsndzomga/there-will-be-no-agi-d9be9af44...
I don't know for sure if there will be, or will not be, AGI eventually. But broadly speaking I agree with the sentiment that there will be "useful tools". And the AIs we have today are well into the "useful tool" category and display behavior that we would classify as "intelligent" if we knew nothing about the entity generating that behavior. I don't see how anybody can claim that that isn't intelligence (while acknowledging that these systems still fall short of fully matching human intelligence in many ways).
It's like some people are committing a sort of "fallacy of the excluded middle" and making this overly binary: "it's either fully intelligent and completely equivalent to a human, or it's not intelligent at all". But that ignores all the middle ground between those extremes.
To take your analogy of the plane, we say the plane flies. We don't say the plane flies like a bird because that wouldn't be accurate. Similarly, we should say, for example, that LLMs summarize, generate text, retrieve potential answers to questions, and generate code—without adding the "like human" part, which only adds confusion. A calculator performs calculations, but it doesn't process calculations like humans do. We should focus on the utility and remember that these are just tools, not sentient beings.
How do you know AGI will never exist? Given that we humans exist, it is not theoretically impossible.
Intelligence is already difficult enough to evaluate in humans; I don't see what lowering the bar for computers will accomplish.
The core problem is that, much like "consciousness", we don't really know what "intelligence" means. Different definitions are used for different purposes, but people tend to think everyone means the same thing when they use the term.
> ...straw-man argument.
Nobody is arguing that it's not intelligent because it isn't equal to human intelligence. The claim in the article is that there is simply no intelligence to speak of here.
> Of course AI is intelligent...
That's one way to make an argument. The author disagrees, as do I. I have seen no evidence whatsoever that these things can do anything truly novel or solve a genuinely new problem. All I see is regurgitation.