No such thing as an expert consensus on anything about LLMs these days, just different forms of grift.
My point is, the question of whether an LLM reasons the same way a human does is about as useful as "does a submarine swim" or "can a telephone talk". The results speak for themselves.
> just different forms of grift
That sounds like a false "both sides"-ing.
It's not symmetrical; there's a lot more money (and potential to grift) in hyping things up as miracle machines.
In contrast, most of the pessimists don't have a discernible profit motive.
Well yes, the corporate accelerationists are certainly pushing for it the most, shoehorning the tech into things it doesn't belong in to see if they can somehow come out on top, which in turn makes lots of people resentful towards it in a reactionary way.
You have artists who've lost work due to diffusion models, teachers who can't assign homework essays anymore, people who hate Microsoft Copilot, anyone not wanting to be replaced by a bot or forced to use the tech just to avoid being outcompeted, people set in their ways who don't want change or imagine it being destructive, and so on. It's a large crowd one can appeal to for personal gain; politics 101. Anyone with half-believable credentials can go on a talk show, say the things people want to hear, and maybe sell a book or two afterwards.
Are today's models on the brink of some exponential, self-perpetuating shot towards superintelligence? Obviously not. Are they overhyped, glorified lookup tables? Also no. Are there problems? Definitely. But I don't think it's entirely fair to dismiss a technology because someone misappropriates it in monopolistic endeavours, instead of directing that dismissal towards those people themselves.
Like, similar to how Elon's douchebaggery has tainted EVs for lots of people for no practical reason, Altman's has done the same for LLMs.
LLMs do not reason. Not hard to understand.
Idk, before this, people from your camp were saying LLMs can't even understand anything. Always moving the goalposts. Next it'll be that they can't feel, or can't do something else, just to be pointlessly contrarian. Anyway, wrong idea.
There have been enough cases of models providing novel results that it's clear that whatever human trait they supposedly lack they don't really need. A car does not need legs, it does things differently. Having legs would even be a major detriment and would hold it back from achieving its top performance.
That's what those brain-simulation projects are conceptually, btw: cars with legs, or planes with flapping wings. That's why they all fail; the approach makes no sense.
This will be the exact same argument in 20 years when we’ve got examples of robots that some fraction of people claim are conscious.
If LLMs could reason, they would flourish in barely understood topics; they don't. They repeat what humans have already said over and over again across the training data. They are a parrot, it's really not that hard to understand.
> They are a parrot
Those are some mighty parrots, if they managed to get gold at the IMO, IOI, and so on...
Well understood topics... what's so hard to understand?
> They repeat what humans have already said
> They are a parrot
Is it really much different from most people? The average Joe doesn't produce novel theories every day - he just rehashes what he's heard. Now the new goalpost seems to be that we can only say an LLM can "reason" if it matches Fields Medalists.
> Is it really much different from most people? The average Joe doesn't produce novel theories every day
You've presented a false choice.
However, the average Joe does indeed produce unique and novel thoughts every day. If that were not the case, he would be brain dead. Each decision - wearing blue or red today - every tiny thought, action, feeling, indecision, crisis, or change of heart: these are just as important.
The jury may be out on how to judge what 'thought' actually is. However, what it is not is perhaps easier to perceive. My digital thermometer does not think when it tells me the temperature.
My paper-and-pen version of the latest LLM (quite a large bit of paper and certainly a lot of ink, I might add) also does not think.
I am surprised so many in the HN community have so quickly taken to assuming as fact that LLMs think or reason, even anthropomorphising LLMs to this end.
For a group so quick to call out 'God of the gaps', they have quite quickly invented their very own 'emergence'.
What is a "novel result"? A random UUID generator also gives a "novel result" every time.
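To make that concrete, a throwaway Python sketch (my own illustration, nothing more):

    # Statistical novelty is cheap: each UUID below is, with overwhelming
    # probability, a string nobody has ever produced before. Nobody would
    # call this reasoning.
    import uuid

    for _ in range(3):
        print(uuid.uuid4())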
Even if we're to humor the "novel" part, have they actually come up with anything truly novel? New physics? New proofs of hard math problems that didn't exist before?
Yes, exactly. There are other papers, but imo Google proved it most definitively [0]: an LLM now holds the state-of-the-art bound on a very specific graph problem.
[0] https://research.google/blog/ai-as-a-research-partner-advanc...
That's not an LLM. AlphaEvolve is a variant of genetic search for program synthesis. Very different from the chat bot that answers questions about ingrown toenails.
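For intuition only, here's a toy sketch of what genetic search over programs means in general (keep a population of candidates, score them against a spec, mutate the survivors). It's my own minimal illustration of that generic idea, not AlphaEvolve's actual method:

    # Toy evolutionary search over tiny "programs": each candidate is just the
    # coefficient tuple (a, b, c) of a*x^2 + b*x + c, and the spec is to match
    # the behaviour of x^2 + 1. Purely illustrative, not AlphaEvolve.
    import random

    def target(x):          # behaviour the synthesized program should match
        return x * x + 1

    def run(prog, x):       # evaluate a candidate "program"
        a, b, c = prog
        return a * x * x + b * x + c

    def fitness(prog):      # lower is better: squared error on test inputs
        return sum((run(prog, x) - target(x)) ** 2 for x in range(-5, 6))

    def mutate(prog):       # nudge one coefficient up or down
        p = list(prog)
        p[random.randrange(3)] += random.choice([-1, 1])
        return tuple(p)

    population = [tuple(random.randint(-3, 3) for _ in range(3)) for _ in range(20)]
    for _ in range(200):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:            # exact match found
            break
        parents = population[:5]                   # selection: keep the fittest
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print("best program found:", min(population, key=fitness))

Real systems are vastly more sophisticated, but the propose/evaluate/select/mutate skeleton is the core of the approach.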