You can always redefine "intelligent" so that humans meet the requirements but AIs don't.

A better model is this: LLMs possess a different type of intelligence than ours, just as an intelligent alien species from another planet might.

A calculator has a very narrow sort of intelligence. It has near-perfect capability in a subset of algebra with finite-precision numbers, but that's it.

An old-school expert system has its own kind of intelligence, albeit brittle and limited to the scope of its pre-programmed if-then-else statements.

By extension, an AI chat bot has a type of intelligence too. Not the same as ours, but in many ways superior, just as a calculator is superior to a human at basic numeric algebra. We make mistakes; the calculator does not. We make grammar and syntax errors all the time; the AI chat bots generally never do. We speak at most half a dozen languages fluently; the chat bots speak over a hundred. We're experts in at most a couple of fields of study; the chat bots have a very wide but shallow understanding. Etc.

Don't be so narrow-minded! Start viewing all machines (and creatures) as having some type of intelligence, instead of treating intelligence as a boolean "have" or "have not".

> A calculator has a very narrow sort of intelligence.

Have you ever heard anyone refer to a calculator as intelligent?

These companies have a vested interest in making the product appear more human/smart than it is. It's new tech smeared with the same old marketing matter.

Would you say that a display and a printer are perfect painters because they can render images? And that a speaker is a very good musician because it can produce sound?

The LLM's task is to produce a string of words according to an internal model trained on texts written by humans (and now generated by other LLMs). This is not intelligence.

Okay, but why isn't it "intelligence"? What part of the definition does it fail? What would convince you that you're wrong?

I wouldn’t say it’s a general definition, but the consensus (in my opinion) is that intelligence is being able to define problems (not just experience them), discern the root cause, and then solve it.

Where it fails is generally the first step. It’s kinda like the old saying “you have to ask the right question”: in all problem-solving matters, defining the problem is the first step. It may not be the hardest (we have problems that are well defined but unresolved), but not being able to do it is often a clear indication of not being able to do the rest.

> What would convince you that you're wrong?

Maybe when I can have the same interaction as with my fellow humans, where I can describe the issue (which is not the same as the problem) and they can either go solve it or provide a sound plan to make the issue disappear. “Issue” here refers to an unpleasant or frustrating situation.

Until then, I see them as tools. Often to speed up my writing pace (generic code and generic presentations), or as a weird database where what goes in has a high probability of coming back out.