Computers might be accurate, but statistical models were never 100% accurate. That doesn't imply that no reasoning is happening. Humans get things wrong too, but they certainly think and reason.

"Pattern matching" to me is another one of those vague terms like "thinking" and "knowing" that people decide LLMs do or don't do based on vibes.

Pattern matching has a definition in this field; it does mean specific things. We know machine learning has excelled at it in greater and greater capacities over the last decade.
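
To make that concrete, here's a toy sketch of the sense I mean: a system that classifies a new input by how closely it matches stored patterns. The data and the nearest-neighbor choice are just illustrative assumptions, not anything specific to LLMs:

```python
# Minimal illustration of "pattern matching" in the ML sense:
# classify a new input by how closely it matches stored patterns.
# (Toy data and the nearest-neighbor choice are assumptions for illustration.)
import numpy as np

# Stored "patterns": feature vectors with known labels.
patterns = np.array([
    [0.9, 0.1],   # pattern resembling class "cat"
    [0.8, 0.2],   # pattern resembling class "cat"
    [0.1, 0.9],   # pattern resembling class "dog"
    [0.2, 0.8],   # pattern resembling class "dog"
])
labels = np.array(["cat", "cat", "dog", "dog"])

def match(x):
    # Score every stored pattern by distance to the input and
    # return the label of the closest match.
    distances = np.linalg.norm(patterns - x, axis=1)
    return labels[np.argmin(distances)]

print(match(np.array([0.85, 0.15])))  # -> "cat"
```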

The other part of this is weighted filtering given a set of rules, which is a simple analogy to how AlphaGo did its thing.
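
Something like this toy sketch, which is a loose analogy and not AlphaGo's actual algorithm; the candidates, weights, and legality rule here are made up for illustration:

```python
# Loose sketch of "weighted filtering given a set of rules":
# rules filter the candidates, learned weights rank what's left.
# (Candidates, weights, and the legality rule are invented for illustration;
# this is a toy analogy, not AlphaGo's actual algorithm.)
import random

candidate_moves = ["a1", "b2", "c3", "d4"]

# Weights a learned policy might assign to each candidate.
policy_weights = {"a1": 0.05, "b2": 0.60, "c3": 0.30, "d4": 0.05}

def is_legal(move, occupied):
    # The "set of rules": here, simply that the square is unoccupied.
    return move not in occupied

def choose_move(occupied):
    legal = [m for m in candidate_moves if is_legal(m, occupied)]
    weights = [policy_weights[m] for m in legal]
    # Sample among legal moves in proportion to their learned weights.
    return random.choices(legal, weights=weights, k=1)[0]

print(choose_move(occupied={"b2"}))  # picks among a1, c3, d4, mostly c3
```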

Dismissing all of this as vague is effectively doing the same thing you're accusing others of doing.

This technology has limits, and despite what Altman says, we do know this and we are exploring them, but within the technology's own confines. They’re fundamentally wholly understandable systems that work consistently in terms of how they do what they do (which is separate from the actual output they produce).

I think "reasoning," as any layman would use the term, is not an accurate description of what these systems do.

You're derailing the conversation. The discussion was about thinking, and now you're arguing about something entirely different without mentioning the word “think” a single time.

If you genuinely believe that anyone knows how LLMs work, how brains work, or how or why the latter does “thinking” while the former does not, you're simply wrong. AI researchers openly acknowledge their ignorance on this.

> Pattern matching has a definition in this field; it does mean specific things.

Such as?

> They’re fundamentally wholly understandable systems that work consistently in terms of how they do what they do (which is separate from the actual output they produce)

Multi-billion-parameter models are definitely not wholly understandable, and I don't think any AI researcher would claim otherwise. We can train them, but we don't know how they work any more than we understand how the training data was made.

> I think "reasoning," as any layman would use the term, is not an accurate description of what these systems do.

Based on what?

You’re welcome to provide counters. I think these are all sufficiently common points that they stand on their own in support of what I posit.

Look, you're the one making a claim; it's up to you to back it up. Handwaving about what any of these things mean isn't an argument.