I think you're vastly underestimating how little of human intent is actually encoded in language in the strict sense, and how much nontrivial intent inference LLMs perform every day on simple queries. This used to be an apparently insurmountable barrier in pre-LLM NLP; now it is simply not a problem.

Suppose I'm in a cold room, you're standing next to a heater, and I say "it's cold". Obviously my intent is that I want you to turn on the heater. But the literal semantics is just "the ambient temperature in the room is low," which has nothing to do with heaters. Yet ChatGPT can easily figure out likely intent in situations like this, just as humans do, often so quickly and effortlessly that we don't notice the complexity of the inference we just performed.

Or suppose I say to a bot "tell me how to brew a better cup of coffee". What is encoded in the literal meaning of the language here? Who's to say that "better" means "better tasting" as opposed to "greater quantity per unit of input"? Or that by "cup of coffee" I mean the liquid drink, as opposed to a cup full of beans? Or perhaps a cup made out of coffee beans? In fact, the literal meaning doesn't even make sense: a "cup" is not something that is brewed; rather, it is the coffee that should go into the cup, possibly via an intermediate pot.

If the bot only understands literal language, then this kind of query is a complete nonstarter. And yet LLMs handle these kinds of queries easily. If anything, they struggle more with understanding the language itself than with inferring intent.

> Yet ChatGPT can easily figure out likely intent in situations like this, just as humans do

No, it is not "figuring out" anything, much less the way a human might. Every time "it's cold" appears in the training data, something else occurs after it. ChatGPT is a statistical model of what is most likely to follow "it's cold" (and the other tokens preceding it) according to the data it has been trained on. It is not inferring anything; it is repeating the most common, or one of the most common, textual sequences that follows another given textual sequence.

> it is repeating the most common...

This nonsense hasn't been true since GPT-2, and even before that it was a poor description.

For instance, do you think one just solves dozens of Erdős problems with the "most common textual sequence"? https://github.com/teorth/erdosproblems/wiki/AI-contribution...

A slight oversimplification: LLMs are also capable of generating the most statistically plausible textual sequence, which can be a sequence not found in the dataset but rather a synthesized combination of the likely continuations of multiple preceding sets of tokens. But yes, that is in fact what it is doing. Computer software does what it is programmed to do, and LLMs are not programmed to do logical inference in any capacity; they operate entirely on probabilities learned from a mind-bogglingly large corpus of text (influenced by things like RLHF, which is still just massaging probabilities).
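To make that concrete, next-token generation looks roughly like the following sketch (using the small open GPT-2 model via the Hugging Face transformers library purely for illustration; ChatGPT's actual model, data, and decoding setup are not public, and the prompt is made up):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative stand-in: a small open model, not ChatGPT itself.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "It's cold in here, could you"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits[0, -1]   # scores for the very next token
probs = torch.softmax(logits, dim=-1)   # probability distribution over the vocabulary

# The model ranks continuations by learned probability, whether or not
# the exact sequence ever appeared verbatim in its training data.
top = probs.topk(5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode([tok.item()])!r:>12}  p={p.item():.3f}")
```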

The claims about solving Erdős problems have been wildly overstated, and notably pushed by people who have a very large financial stake in hyping up LLMs. Nonetheless, I did not say that LLMs are useless. If they are trained on sufficient data, it should not be surprising that correct answers are probabilistically likely to occur. Like any computer software, that makes them a useful tool. It does not make them in any way intelligent, any more than a calculator would be considered intelligent despite being completely superior to human intelligence at its given task.

> not programmed to do logical inference in any capacity

Yet they have no problem doing so when solving Erdős problems. This isn't up for debate at this point.

> The claims about solving Erdős problems have been wildly overstated

These are verified solutions. They exist, are not trivial, and are of obvious interest to the math community. Take it up with Terence Tao and co.

> pushed by people who have a very large financial stake in hyping up LLMs

Libel.

> It does not make them in any way intelligent

Word games.

Honestly, big noob question: isn't math just very, very nested pattern matching based on a few foundational operators? I've always felt that I'm bad at math because I forget all the rules, but seeing solutions (and knowing the pattern used) always made "sense".

I always thought the hard math problems are just so deeply nested, or depend on remembering trick xyz, that people simply haven't thought of them yet.

The number of mathematical structures and transformations you can apply (the possible rules) is effectively infinite. Simply remembering the rules might work at first, but you'll soon run into the combinatorial explosion: https://en.wikipedia.org/wiki/Combinatorial_explosion
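A toy illustration of that explosion (the branching factors and depths below are made-up numbers, purely to show the growth rate):

```python
# Toy model of rule-based search: if `b` rules are applicable at each step
# and a derivation is `k` steps deep, there are b**k candidate derivations.
# The values of b and k are arbitrary, chosen only for illustration.
for b in (5, 10, 20):        # branching factor: rules applicable per step
    for k in (5, 10, 20):    # depth: number of steps in the derivation
        print(f"b={b:2d}, k={k:2d} -> {b ** k:.2e} candidate derivations")
```

Even with just 20 rules and 20 steps you're already at roughly 10^26 candidates, which is why "remember all the rules" stops working.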

You could go a step further and simply say, "well, OK, then the LLMs are merely doing some form of incremental/heuristic search!" Yes, but at that point you'd also be hard-pressed to claim that humans themselves are doing anything beyond that. You run out of naturalistic explanations.
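For concreteness, by "incremental/heuristic search" I mean a loop shaped roughly like this (a generic sketch; the state representation and scoring function are placeholders, and nothing here claims to describe how any particular model or brain is implemented):

```python
import heapq

def best_first_search(start, neighbors, score, is_goal, budget=100_000):
    """Generic best-first search: always expand the most promising state
    according to a heuristic `score` (lower is better). `neighbors(state)`
    yields successor states; `is_goal(state)` tests for success."""
    counter = 0                        # tie-breaker so heapq never compares states
    frontier = [(score(start), counter, start)]
    seen = {start}                     # states must be hashable
    while frontier and budget > 0:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                counter += 1
                heapq.heappush(frontier, (score(nxt), counter, nxt))
        budget -= 1
    return None                        # budget exhausted without reaching a goal
```

The disagreement then turns on the quality of the heuristic, not on whether doing "search" disqualifies the process.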

> This isn't up for debate at this point.

If by "not up for debate" you mean that it is delusional and literally evidence of psychosis to suggest that computer software is doing something it is not programmed to do, you would be correct. Probabilistic analysis can carry you very, very far in doing something that looks like logical inference at the surface level, but it is nonetheless not logical inference. LLMs have been getting increasingly good at factoring in larger and longer contexts while still managing to generate plausibly correct answers, becoming more and more useful all the while, but they are still not capable of logical inference. This is why your genius-mathematician AGI consciousness stumbles on trivial logic puzzles it has not seen before, like the car wash meme.

> delusional and literally evidence of psychosis to suggest that computer software is doing something it is not programmed to do

These are just insults and outright lies, and you know that. We're done here.

AI progress from here on out will be extra sweet.

You don't have the ability to predict progress, either.

Well, I'm not clairvoyant, but this is a very easy prediction to make. And we're not talking about decades in the future; this is simply a matter of letting the near future unfold.

The LLMs are doing this via chat, not by physically standing in a room and inferring context. You have to prompt the LLM that you're in a room next to someone saying it's cold, with the most likely reading being a desire to have the temperature turned up. Of course, that won't always be the case. It could be an inside joke, a comment with no intent to have the heat adjusted, a room where the heat can't be adjusted, or a reference to someone's personality bringing down the temperature, so to speak.

Precisely. This is what the bozo AI accelerationists don't understand.

What LLMs are is almost like a hacked means of intuition. It's very impressive, no doubt. But ultimately it isn't even close to what the well-trained human can infer at lightning speed when combined with intuition.

The LLM producers really ought to accept that their existing investments are ultimately not going to yield the returns necessary for a viable, self-sustaining business once future reinvestment needs are accounted for, and instead move their focus toward understanding how to marry human intelligence with LLM technology. Anthropic has been better on this front, of course. OAI, though? A complete disaster.

> it isn't even close to what the well-trained human can infer at lightning speed when combined with intuition.

It's a lot closer to that than anything was five years ago. Do you really think we're going to be interacting with them the same way five years from now?

I know what you're getting at, but those examples are reaching.

it’s cold -> turn on the heater

I’d never just turn on the heater silently if someone said this to me. I think it means something else.

If someone just said "it's cold", then yeah, that's kinda toxic.

If they said "turn on the heater", then you have no ambiguity.