> we still see claims that LLMs are "just next token predictors" and "just regurgitate code they read online". These are just uninformed and wrong views. It's fair to say that these people were (are!) wrong.
I don't think it's fair to say that at all. How are LLMs not statistical models that predict tokens? It's a big oversimplification but it doesn't seem wrong, the same way that "computers are electricity running through circuits" isn't a wrong statement. And in both cases, those statements are orthogonal to how useful they are.
Objecting to the phrase is just a tell that the person believes LLMs are ontologically more than what they are.
No one says "computers are JUST electricity running through circuits" because no one tries to argue the computer itself is "thinking" or has some kind of being. No one tries to argue that when you put the computer to sleep it is actually doing a form of "sleeping".
The mighty token though produces all kinds of confused nonsense.
> How are LLMs not statistical models that predict tokens?
there's LLMs as in "the blob of coefficients and graph operations that runs on a gpu whenever there's an inference", which is absolutely "a statistical model that predicts tokens", and LLMs as in "the online apps that iterate, have access to an entire automated linux environment, can run $LANGUAGE scripts and do web queries when an intermediate statistical output contains too many maybes, and use the result to drive further inference".
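The first sense, the bare inference loop, really is nothing but repeated next-token sampling. A minimal sketch, using a toy bigram table in place of a real model (all names and probabilities here are made up for illustration):

```python
import random

# Toy "statistical model": a bigram table mapping the previous token to
# a distribution over next tokens. Stand-in for the real network weights.
BIGRAMS = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a":   {"cat": 0.5, "dog": 0.5},
    "cat": {"</s>": 1.0},
    "dog": {"</s>": 1.0},
}

def sample_next_token(prev, rng):
    """The 'blob of coefficients' view: draw one token from a distribution."""
    tokens, probs = zip(*BIGRAMS[prev].items())
    return rng.choices(tokens, weights=probs, k=1)[0]

def generate(rng):
    """Bare inference loop: nothing but repeated next-token prediction."""
    out = ["<s>"]
    while out[-1] != "</s>":
        out.append(sample_next_token(out[-1], rng))
    return out[1:-1]  # strip the start/end markers

print(generate(random.Random(0)))
```

The second sense wraps this loop in an outer program that inspects intermediate outputs, calls tools, and feeds results back in; the inner loop is unchanged, but the system around it is doing much more than predicting tokens.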
> I don't think it's fair to say that at all. How are LLMs not statistical models that predict tokens? It's a big oversimplification but it doesn't seem wrong
Modern LLMs are further trained via reinforcement learning, where the training objective is no longer maximizing next-token probability.
They still produce tokens sequentially (ignoring diffusion models for now), but since the objective is so different, thinking of them as next-token predictors is more wrong than right.
Instead one has to think of them as trying to fit their entire output to the objective learned in the reinforcement phase. That's why reasoning in LLMs works so well.
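The difference between the two objectives can be sketched with a toy example (made-up numbers, nothing like real training code): pretraining scores each next token against a reference continuation, while the RL phase scores whole outputs with a reward.

```python
import math

# Toy policy: probabilities assigned to entire two-token outputs.
# Illustrative numbers only.
seq_probs = {
    ("the", "cat"): 0.4,
    ("the", "dog"): 0.3,
    ("a", "cat"):   0.2,
    ("a", "dog"):   0.1,
}

# Pretraining-style objective: maximize the log-probability of a
# reference continuation, token by token (cross-entropy loss).
target = ("the", "cat")
next_token_loss = -math.log(seq_probs[target])

# RL-style objective: no single "correct" next token. A reward scores
# the *whole* output (here: +1 if it mentions a cat), and the policy is
# pushed to maximize expected reward over its outputs.
def reward(seq):
    return 1.0 if "cat" in seq else 0.0

expected_reward = sum(p * reward(s) for s, p in seq_probs.items())

print(next_token_loss, expected_reward)
```

Decoding is sequential in both cases; what changes is what the weights were optimized for, which is the commenter's point.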
It's wrong because it’s deliberately used to mischaracterize the current abilities of AI. Technically it's not wrong, but in basically every case the person saying it is deliberately using the concept to downplay AI as just a pattern-matching machine.
I'm a bit confused. You say it's wrong, but then later say it's not wrong. The fact that it can be used to downplay advancements in AI doesn't mean it's wrong, and calling it wrong just because it can be used that way is a bit disingenuous.
[flagged]
You don't understand the meaning of "technically". Also, don't use inflammatory language.
P.S. The response is filled with bad faith accusations.
Look at your response. You first dismissed me completely by saying I don’t know what "technically" means. Then you mischaracterized my statement as an intent to inflame. These are highly insulting and dismissive statements.
You’re not willing to have a good faith discussion. You took the worst possible interpretation of my statement and crafted a terse response to shut me down. I only did two things. First I explained myself… then I called you out for what you did while remaining civil. I don’t skirt around HN rules as a means to an end, which is what I believe you’re doing. I’m ok with what you’re doing… but I will call it out.
No surprise that the dishonesty and victim-playing are persistent. It's a fact that this person misuses the term "technically", and that they used inflammatory language. Saying so does not dismiss them completely ... but even if it did, so what? Doing so is not bad faith. No one has any obligation to engage with someone. I won't comment further.
Indeed, don’t comment further: you didn’t even have the respect to respond to me directly. That is deliberately inflammatory. Just respond to the guy you’re talking to like 99% of HN. Why avoid it? It’s a tactic, that’s why, and also pointless.
I’m not a victim of anything. But you are definitely a perpetrator and instigator.
I am not using inflammatory language to hurt anyone. I am illustrating a point about the contrast between technical and non-technical meanings. One meaning is offensive; the other is technically correct. Don't start a witch hunt by deliberately misinterpreting what I'm saying.
So "technically" means something like this: in a technical sense you are a stochastic parrot. You are also technically an object. But in everyday language we don't call people stochastic parrots or objects, because language is nuanced: the technical meaning is rarely used at face value, and other meanings are used in its place.
So when people use a term in conversation and go by the technical meaning, it's usually either very strange or done deliberately to deceive. Sort of like how you claim I don't know what "technically" means, and sort of like how you deliberately misinterpreted my words as "inflammatory" when I did nothing of the sort.
I hope you learned something basic about English today! Good day to you, sir!
> a technical sense you are a stochastic parrot.
I am not. I'm sorry you feel this way about yourself. You are more than a next token predictor.
If I am more than a next token predictor… doesn’t that mean I’m a next token predictor plus more? Do you not predict the next word you’re going to say? Of course you do; you do that and more.
Humans ARE, technically, next token predictors, and we are also more than that. That is why calling someone a next token predictor is a mischaracterization. I think we are in agreement; you just didn’t fully understand my point.
But the claim for LLMs are next token predictors is the SAME mischaracterization. LLMs are clearly more than next token predictors. Don’t get me wrong LLMs aren’t human… but they are clearly more than just a next token predictor.
The whole point of my post is to point out how the term "stochastic parrot" is weaponized to dismiss LLMs and to mischaracterize and hide the current abilities of AI. The parent OP was using the technical definition as an excuse to use the word as a means to his own ends, namely being “against” AI. It’s a pathetic excuse. I think it’s clear LLMs have moved beyond stochastic parrots, and there are just a few stragglers left who can’t see that AI is more than that.
You can be “against” AI, that’s fine, but don’t mischaracterize it: argue and make your points honestly and in good faith. Using the term "stochastic parrot", and even what the other poster did in attempting to accuse me of inflammatory behavior, is just tactics and manipulation.