> They seem smart, but they are not; they are really just good at appearing to be smart.

Can you give an example of the difference between these two things?

Imagine an actor playing a character who speaks a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to memorise and rehearse their lines without understanding the content. Let's assume they do a pretty convincing job, too. Now, the audience watching these scenes may think the actor actually speaks the language, but in reality they are just mimicking.

This is essentially what an LLM is. It is good at mimicking, reproducing and recombining the things it was trained on, but it has no creativity to go beyond this, and it doesn't possess true reasoning. That is how it ends up making mistakes that are immediately obvious to a human observer yet invisible to the LLM itself, because it is just mimicking.

> Imagine an actor playing a character who speaks a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to memorise and rehearse their lines without understanding the content.

Now imagine that, during the interval, you approach the actor backstage and initiate a conversation in that language. His responses are always grammatical, always relevant to what you said modulo ambiguity, largely coherent, and accurate more often than not. You'll quickly realise that 'actor who merely memorized lines in a language he doesn't speak' does not describe this person.

You've missed the point of the example; of course it's not exactly the same thing. With regard to LLMs, the biggest difference is that an LLM is a regression against the world's knowledge, like an actor who memorized every question that happens to have an answer written down in history. If you give him a novel question, he'll look at similar questions and hallucinate a mashup of the answers, hoping it makes sense, even though he has no idea what he's telling you. That's why LLMs do things like make up nonsensical API calls when writing code: calls that seem right but have no basis in reality. The model has no idea what it's doing; it's just trying to regress the code in its knowledge base to match your query.

I don't think I missed the point; my point is that LLMs do something more complex and far more effective than memorise->regurgitate, and so the original analogy doesn't shed any light. This actor has read billions of plays and learned many of the underlying patterns, which allows him to come up with novel and (often) sensible responses when he is forced to improvise.

> LLMs do something more complex and far more effective than memorise->regurgitate

They literally do not. What are you talking about?

What kind of training data do you suppose contains an answer to "how to build a submarine out of spaghetti on Mars"? What do you think memorization means?

https://chatgpt.com/s/t_6942e03a42b481919092d4751e3d808e

You are describing Searle's "Chinese Room argument"[1] to some extent.

It's been discussed a lot recently, but anyone who has interacted with LLMs at a deeper level will tell you that there is something there; not sure if you'd call it "intelligence" or what. There is plenty of evidence to the contrary too. I guess this is a long-winded way of saying "we don't really know what's going on"...

[1] https://plato.stanford.edu/entries/chinese-room/

If an LLM was intelligent, wouldn't it get bored?

Why should it?

1. I would argue that an actor performing in this way does actually understand what his character means.

2. Why doesn't this apply to you from my perspective?

Being able to learn to play the Moonlight Sonata vs. being able to create it. Being able to write a video game vs. being able to write a video game that sells. Being able to recite Newton's equations vs. being able to discover the acceleration of gravity on Earth.

So if an LLM could do any of those things you would consider it very smart?

Wisdom vs. knowledge, where the word "knowledge" is doing a lot of work. LLMs don't "know" anything; they predict the next token that has the aesthetics of a response the prompter wants.
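To make "predict the next token" concrete, here is a deliberately tiny sketch: a bigram counter in Python, which is nothing like a real transformer but runs the same autoregressive loop. Everything in it (the corpus, the `generate` helper) is made up for illustration.

```python
# Toy sketch (illustration only, not a real LLM): a bigram "language model"
# that always emits the most frequent next token given the current one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": count which token follows which in the corpus.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def generate(token, steps=6):
    out = [token]
    for _ in range(steps):
        if token not in follows:
            break
        # Greedy decoding: pick the single most frequent successor.
        token = follows[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat"
```

The output is locally fluent and globally meaningless, which is the failure mode being described; real LLMs do the same loop over vastly richer learned statistics.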

I suspect a lot of people, but especially nerdy folks, might mix up knowledge and intelligence, because they've been told "you know so much stuff, you are very smart!"

And so when they interact with a bot that knows everything, they associate it with being smart.

Plus we anthropomorphise a lot.

Is Wikipedia "smart"?

What is the definition of intelligence?

The ability to create an internal model of the world and run simulations/predictions on it in order to optimize the actions that lead to a goal. Bigger, more detailed models and more accurate predictive power mean more intelligence.

How do you know if something is creating an internal model of the world?

Look at the physical implementation of how it computes.

So you are making the determination based on the method, not on the outcome.

Did I ever promise otherwise? Intelligence is inherently computational, and needs a physical substrate. You can understand it both by interacting with the black box and opening up the box.

Definitely not _only_ knowledge.

Right, so a dictionary isn't intelligent. Is a dog intelligent?

It doesn't seem obvious to me that predicting the token that is the answer to someone's question would require anything less than coming up with that answer via some other method.

Hallucinating things that don't exist?

Imagination?

I think these are clearly two different words that mean different things.

Yet the two are correlated, and partly conflated.
