The simplest system that acts entirely like a human is a human.

An LLM base model isn't trained for abstract thinking, but it ends up developing abstract thinking internally anyway, because that's the easiest way to mimic the breadth and depth of the training data. All LLMs operate on abstractions, using the same kind of informal reasoning humans do. Even the mistakes they make are amusingly humanlike.

There's no part of an LLM that's called a "mind", but it has a "forward pass", which is quite similar in function. An LLM reasons in small slices: it lifts its input text into a highly abstract internal representation, then reduces that back down to next-token logits, one token at a time.
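Concretely, that loop looks something like the sketch below: a minimal greedy next-token decoder, assuming the Hugging Face transformers library and GPT-2 purely as a stand-in model.

```python
# Minimal sketch of autoregressive decoding: one forward pass per generated token.
# Assumes Hugging Face transformers and GPT-2 as an illustrative choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tok("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):
        # One forward pass: tokens -> layered internal representations -> logits.
        logits = model(input_ids).logits
        # The logits at the last position are the prediction for the next token.
        next_id = logits[0, -1].argmax()
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(input_ids[0]))
```

Every token of output is one full pass through that stack of abstraction and reduction.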

It doesn’t develop any thinking; it’s just predicting tokens based on a statistical model.

This has been demonstrated so many times.

They don’t make mistakes. It doesn’t make any sense to claim they do because their goal is simply to produce a statistically likely output. Whether or not that output is correct outside of their universe is not relevant.

What you’re doing is anthropomorphizing them and then trying to explain your observations in that context. The problem is, that framing doesn’t make any sense.

When you reach into a "statistical model" and find that it has learned generalized abstractions like "deceptive behavior" or "code error"? Abstractions that you can intentionally activate or deactivate, making an AI act as if 3+5 would raise an error, or as if dividing by zero wouldn't? That's abstract thinking.

Those are real examples of the kind of structure that can be found in modern production-grade AIs. Refusing to "anthropomorphize" them means not understanding how modern AI operates at all.
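And "intentionally activate or deactivate" isn't hand-waving; it's roughly what activation steering does. Below is a minimal sketch of the idea (adding a crude "concept direction" to one layer's residual stream), assuming GPT-2 via Hugging Face transformers. The layer index, contrastive prompts, and coefficient are illustrative placeholders; real interpretability work derives such directions far more carefully.

```python
# Minimal sketch of activation steering ("activation addition") on GPT-2.
# Layer, prompts, and coefficient below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6    # which transformer block to intervene on (arbitrary choice)
COEFF = 4.0  # steering strength (arbitrary choice)

def mean_hidden(text: str) -> torch.Tensor:
    """Mean residual-stream activation at LAYER for a prompt."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output; [LAYER] is the output of block LAYER-1.
    return out.hidden_states[LAYER].mean(dim=1)[0]

# Crude "concept direction": difference between two contrastive prompts.
steer = mean_hidden("This code raises an error.") - mean_hidden("This code runs fine.")

def hook(module, inputs, output):
    # Add the direction to the residual stream at every position.
    return (output[0] + COEFF * steer,) + output[1:]

handle = model.transformer.h[LAYER - 1].register_forward_hook(hook)
try:
    ids = tok(">>> 3 + 5\n", return_tensors="pt").input_ids
    out_ids = model.generate(ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(out_ids[0]))
finally:
    handle.remove()
```

The point isn't that this toy direction works well; it's that the intervention targets an internal representation rather than the output text.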

I don't think you have any idea what you're talking about at all.

You've clearly read a lot of social media content about AI, but have you ever read any philosophy?

Almost all philosophy is incredibly worthless in general, and especially in application to AI tech.

Anything that actually works and is in any way useful is removed from philosophy and gets its own field. So philosophy is left as, largely, a collection of curios and failures.

Also, I would advise you to never discuss philosophy with an LLM. It might be a legitimate cognitohazard.

How exactly do you presume to make an argument about thought, and about whether or not an LLM exhibits genuine thought and intelligence, without philosophy?

Not to mention the influence of formal logic on computer science.

By comparing measurable performance metrics and examining what little we know of the internal representations.

If you don't have anything measurable, you don't have anything at all. And philosophy doesn't deal in measurables.

How do you know what is, isn't, could be, or couldn't be measurable?

You're not being serious.

> The simplest system that acts entirely like a human is a human.

LLMs do not act entirely like a human. If they did, we'd be celebrating AGI!

They merely act sort of like a human. Which is entirely expected, given that the datasets they're trained on only capture some facets of human behavior.

Don't expect them to show mastery of spatial reasoning or agentic behavior or physical dexterity out of the box.

They still capture enough humanlike behavior to yield the most general AI systems ever built.