How can you know?
By thinking about what a computer is actually doing & realizing that attributing thought to an arithmetic gadget leads to all sorts of nonsensical consequences, like an arrangement of dominoes & their cascade being a thought. The metaphysics of thinking computers is incoherent & if you study computability theory you'll reach the same conclusion.
I'd say that thoughts and reasoning are two different things; you're moving the goalposts.
But what makes computer hardware fundamentally incompatible with thinking, compared to a brain?
I've already explained it in several places. The burden of proof is on those drawing the equivalence to provide actual evidence for why they believe carbon & silicon are interchangeable & why substrate independence is a valid assumption. I have studied this problem for much longer than many people commenting on this issue & I am telling you that your position is metaphysically incoherent.
It's so funny to me that people are still adamant about this like two years after it's become a completely moot point.
Moot point? As far as I know, it’s still intensely debated, and there are some excellent papers out there providing evidence that LLMs truly are just statistical prediction machines. It’s far from an unreasonable position.
Experts are adamant about this. Just take a look at https://youtu.be/iRqpsCHqLUI
No such thing as an expert consensus on anything about LLMs these days, just different forms of grift.
My point is, the question of whether an LLM reasons the same way a human does is about as useful as "does a submarine swim" or "can a telephone talk". The results speak for themselves.
> just different forms of grift
That sounds like a false "both sides"-ing.
It's not symmetrical, there's a lot more money (and potential to grift) hyping things up as miracle machines.
In contrast, most of the pessimists don't have a discernible profit motive.
Well yes, the corporate accelerationists are certainly pushing for it the most, shoehorning the tech into things it doesn't belong in to see if they can somehow come out on top, which in turn makes lots of people resentful towards it in a reactionary way.
You have artists who've lost work due to diffusion models, teachers who can't assign homework essays anymore, people who hate Microsoft Copilot, anyone who doesn't want to be replaced by a bot or forced to use the tech to avoid being outcompeted, people set in their ways who don't want change or imagine it being destructive, etc. It's a large crowd that one can appeal to for personal gain, politics 101. Anyone with half-believable credentials can go on a talk show and say the things people want to hear, maybe sell a book or two afterwards.
Are today's models on the brink of some exponential, self-perpetuating shot towards superintelligence? Obviously not. Are they overhyped, glorified lookup tables? Also no. Are there problems? Definitely. But I don't think it's entirely fair to dismiss a technology because someone misappropriates it in monopolistic endeavours, instead of directing that dismissal at those people themselves.
Like, similar to how Elon's douchebaggery has tainted EVs for lots of people for no practical reason, Altman's has done the same for LLMs.
LLMs do not reason. Not hard to understand.
Idk, before this people from your camp were saying LLMs can't even understand anything. Always moving the goalposts. Then it'll be that they can't feel, or can't do something else, just to be pointlessly contrarian. Anyway, wrong idea.
There have been enough cases of models providing novel results that it's clear that whatever human trait they supposedly lack, they don't really need. A car does not need legs; it does things differently. Having legs would even be a major detriment and would hold it back from achieving its top performance.
That's what those brain-simulation projects are conceptually, btw: cars with legs or planes with flapping wings. That's why they all fail; the approach makes no sense.
This will be the exact same argument in 20 years when we’ve got examples of robots that some fraction of people claim are conscious.
If LLMs could reason, they would flourish in barely understood topics; they don't. They repeat what humans have already said over and over again across the training data. They are a parrot, it's really not that hard to understand.
> They are a parrot
Those are some mighty parrots there, if they managed to get gold at the IMO, IOI, and so on...
Well understood topics... what's so hard to understand?
>They repeat after what humans already said
>They are a parrot
Is it really much different from most people? The average Joe doesn't produce novel theories every day - he just rehashes what he's heard. Now the new goalpost seems to be that we can only say an LLM can "reason" if it matches Fields Medalists.
> Is it really much different from most people? The average Joe doesn't produce novel theories every day
You've presented a false choice.
However, the average Joe does indeed produce unique and novel thoughts every day. If that were not the case, he would be brain dead. Each decision - wearing blue or red today - every tiny thought, action, feeling, indecision, crisis, or change of heart: these are just as important.
The jury may be out on how to judge what 'thought' actually is. However, what it is not is perhaps easier to perceive. My digital thermometer does not think when it tells me the temperature.
My paper-and-pen version of the latest LLM (quite a large bit of paper and certainly a lot of ink, I might add) also does not think.
I am surprised so many in the HN community have so quickly taken to assuming as fact that LLMs think or reason, even anthropomorphising LLMs to this end.
For a group so quick to call out 'God of the gaps', they have quite quickly invented their very own 'emergence'.
What is "novel results"? A random UUID generator also gives "novel result", every time.
Even if we're to humor the "novel" part, have they actually come up with anything truly novel? New physics? New proofs of hard math problems that didn't exist before?
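(To make that concrete, here is a tiny, purely illustrative Python sketch using only the standard-library uuid module - every output is almost certainly "novel", yet none of it means anything.)

    import uuid

    # Each call yields a string that has almost certainly never existed before,
    # yet that "novelty" carries no insight whatsoever.
    for _ in range(3):
        print(uuid.uuid4())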
Yes, exactly. There are other papers, but Google proved it most definitively imo [0]: an LLM now holds the state of the art for the lowest bound on a very specific graph problem.
[0] https://research.google/blog/ai-as-a-research-partner-advanc...
That's not an LLM. AlphaEvolve is a variant of genetic search for program synthesis. Very different from the chat bot that answers questions about ingrown toenails.
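(For readers who haven't seen the term: "genetic search for program synthesis" roughly means an outer evolutionary loop that keeps mutating and scoring candidate programs. Below is a deliberately simplified, hypothetical Python sketch of that shape - propose_variants and score are placeholders standing in for a model-driven mutation step and a problem-specific evaluator, not anything from the real AlphaEvolve.)

    import random

    def propose_variants(program: str, n: int = 4) -> list[str]:
        # Placeholder mutation step: a real system would have a model rewrite
        # the candidate program; here we just tag it with a random marker.
        return [program + f"  # variant {random.randint(0, 999)}" for _ in range(n)]

    def score(program: str) -> float:
        # Placeholder evaluator: a real system would run the program and measure
        # how good its result is; here the score is random.
        return random.random()

    def evolve(seed: str, generations: int = 10) -> str:
        # Keep the best candidate found so far and try to improve on it.
        best, best_score = seed, score(seed)
        for _ in range(generations):
            for candidate in propose_variants(best):
                s = score(candidate)
                if s > best_score:  # keep only strict improvements
                    best, best_score = candidate, s
        return best

    print(evolve("def solve(): pass"))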
The normative importance of a fact may increase when more people start willfully ignoring it for short-term profit.
Imagine somebody in 2007: "It's so funny to me that people are still adamant about mortgage default risk after it's become a completely moot point because nobody cares in this housing market."
That’s nailing it really well: “willfully ignoring” is precisely what’s happening all around me. I talk about small, focused AI models; everyone around me raves about AGI.

Discussions of the energy use and privacy issues of cloud vs local inference end on how awesome the power of GPUs is, and the jobs too. GPU-backed financing with depreciation schedules past useful life seems fine to anyone chasing a short-term gain.

Even the job market is troubled: you can hardly tell a relevant candidate from an irrelevant one, because everyone is an AI expert these days - hallucinations seem to make lying more casual.
It’s pretty clear to me there is a collective desire to ignore the problems in order to sell more GPUs, close the next round, get that high-paying AI job.
Part of me wishes humans would show the same dedication to fight climate change…
Didn't we have an economists' consensus back then about what was going to happen?
My point is that a fact's popularity is not equal to its importance. That scenario was meant to highlight how the two can even have an inverse relationship.
Diving into how well/badly anybody predicted a certain economic future is a whole different can of worms.
That said: "The market can stay irrational longer than I can stay solvent." :p