Idk, before this people from your camp were saying LLMs can't even understand anything. Always moving the goalposts. Then it'll be that they can't feel, or can't do something else, just to be pointlessly contrarian. Anyway, wrong idea.

There have been enough cases of models producing novel results that it's clear that whatever human trait they supposedly lack, they don't really need. A car does not need legs; it does things differently. Having legs would even be a major detriment and would hold it back from reaching its top performance.

That's what those brain-simulating projects are conceptually, btw: cars with legs, or planes with flapping wings. That's why they all fail; the approach makes no sense.

This will be the exact same argument in 20 years when we’ve got examples of robots that some fraction of people claim are conscious.

If LLMs could reason, they would flourish in barely understood topics, and they don't. They repeat what humans have already said, over and over, all across the training data. They are a parrot; it's really not that hard to understand.

> They are a parrot

Those are some mighty parrots, if they managed to get gold at the IMO, IOI, and so on...

Well-understood topics... what's so hard to understand?

>They repeat after what humans already said

>They are a parrot

Is it really much different from most people? The average Joe doesn't produce novel theories every day - he just rehashes what he's heard. Now the new goalpost seems to be that we can only say an LLM can "reason" if it matches Fields Medalists.

> Is it really much different from most people? The average Joe doesn't produce novel theories every day

You've presented a false choice.

However, the average Joe does indeed produce unique and novel thoughts every day. If that were not the case, he would be brain dead. Each decision - wearing blue or red today - every tiny thought, action, feeling, indecision, crisis, or change of heart: these are just as important.

The jury may be out on how to judge what 'thought' actually is. However, what it is not is perhaps easier to perceive. My digital thermometer does not think when it tells me the temperature.

My pen-and-paper version of the latest LLM (quite a lot of paper and certainly a lot of ink, I might add) also does not think.

I am surprised so many in the HN community have so quickly taken to assuming as fact that LLMs think or reason, even anthropomorphising LLMs to this end.

For a group quick to call out 'God of the gaps', they have quite rapidly invented their very own 'emergence'.


What is "novel results"? A random UUID generator also gives "novel result", every time.

Even if we humor the "novel" part, have they actually come up with anything truly novel? New physics? New proofs of hard math problems that didn't exist before?
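
To make that point concrete, here is a minimal Python sketch (standard-library uuid module only; the function name is just for illustration) of a generator whose every output is "novel" in the trivial sense that it has almost certainly never appeared before, yet clearly involves no reasoning:

    import uuid

    # Each call produces a fresh random 128-bit identifier: statistically
    # guaranteed to be "novel", yet produced with zero understanding.
    def novel_result() -> str:
        return str(uuid.uuid4())

    if __name__ == "__main__":
        for _ in range(3):
            print(novel_result())  # different output on every run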

Yes, exactly. There are other papers, but Google proved it most definitively, imo [0]: an LLM now holds the state of the art for the lowest known bound on a very specific graph problem.

[0] https://research.google/blog/ai-as-a-research-partner-advanc...

That's not an LLM. AlphaEvolve is a variant of genetic search for program synthesis. Very different from the chat bot that answers questions about ingrown toenails.