If LLMs could reason, they would flourish in barely understood topics, but they don't. They repeat what humans have already said, over and over, all across the training data. They are a parrot; it's really not that hard to understand.

> They are a parrot

Those are some mighty parrots, if they managed to get gold at the IMO, IOI, and so on...

Those are well-understood topics... what's so hard to understand?

>They repeat after what humans already said

>They are a parrot

Is it really much different from most people? The average Joe doesn't produce novel theories every day - he just rehashes what he's heard. Now the new goalpost seems to be that we can only say an LLM can "reason" if it matches Fields Medalists.

> Is it really much different from most people? The average Joe doesn't produce novel theories every day

You've presented a false choice.

However, the average Joe does indeed produce unique and novel thoughts every day; if that were not the case, he would be brain-dead. Each decision - wearing blue or red today - every tiny thought, action, feeling, indecision, crisis, or change of heart: these are just as much acts of thought as any grand theory.

The jury may be out on how to judge what 'thought' actually is, but what it is not is perhaps easier to perceive. My digital thermometer does not think when it tells me the temperature.

My paper-and-pen version of the latest LLM (quite a large bit of paper and certainly a lot of ink, I might add) also does not think.

I am surprised that so many in the HN community have so quickly taken it as fact that LLMs think or reason, even anthropomorphising LLMs to this end.

For a group inclined to call out 'God of the gaps' at the first opportunity, they have quite quickly invented their very own 'emergence'.
