I don't trust anyone who claims that LLMs today are superhumanly intelligent. All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning', all while subsidising the real costs to capture the market. So much SciFi BS and extrapolation about a technology that is useful if adopted with care.

This technology needs to become a commodity to destroy this aggregation of power between a few organizations with untrustworthy incentives and leadership.

Your brain is performing "compute-intensive brute-force attacks on the problem/solution space" as you read this very sentence. You have been training on patterns of English syntax, structure, and semantics since you were a child, and that training is supporting you now with inference (or interpretation). And, for compute efficiency, you probably have evolution to thank.

People like to say this as if it were an apples-to-apples comparison, but it isn’t remotely how the brain actually works - and even if it were, the brain does it automatically, without direction, and at an infinitesimal fraction of the power required.

And we’re just talking about cognition - it completely ignores the automatic processes such as maintaining and regulating the body and its hormones, coordinating and maintaining muscles, and visual/spatial processing that takes in massive amounts of data at a very fine scale and informs the body what to do with it - I could go on.

One of the more annoying things about this conversation is that you don’t even need to make this argument to make the point you’re trying to make, but people love doing it anyway. It needlessly reduces how amazing the human brain is to a bunch of catchy sci-fi-sounding idioms.

It can be simultaneously true that transformer-based language models can be very smart and that the human brain is also very smart. It genuinely confuses me why people need to make it an either/or.

Thank you, this comparison has been a huge annoyance of mine for the past 3 years of... this same debate over and over.

I think it's the hubris that I find most offensive in this argument: a guy knows one complex thing (programming) and suddenly thinks he can make claims about neuroscience.

Great post

Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does. AI is more like a parrot which is trained to give a correct-looking response to any question. The parrot doesn't think, doesn't know what it's doing, etc.; it just does it because it gets a treat every time a "good" answer is prompted. This is why it can't do things like tell whether the parentheses here are balanced: ((((()))))) (you can test this). It doesn't have any kind of genuine cognition.
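For reference, the balance check being invoked is a single pass with a counter; here is a minimal sketch in Python (the `is_balanced` name is just illustrative), run against the string quoted above:

```python
def is_balanced(s: str) -> bool:
    """Return True if every '(' has a matching ')' and nothing is left open."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a ')' with no open '(' left to match
                return False
    return depth == 0

s = "((((())))))"  # the string quoted in the comment above
# Prints the count of '(' and ')' and whether the string balances.
print(s.count("("), s.count(")"), is_balanced(s))
```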

I love reading posts like this. When you were a child, learning math or grammar, do you not remember bouncing off the walls of incorrect answers, eventually landing on a trajectory down the corridor of the right answer? Or were you always instantly zero-shotting everything?

In my experience, this is exactly how language models solve hard new problems, and largely how I solve them too. Propose a new idea, see if it works, iterate if not, keep going until it works.

Of course you can see how to solve a problem that you've seen before, like a visual puzzle about balanced parentheses. We're hyper specialized to visually identify asymmetries. LMs don't have eyes. Your mockery proves nothing.

The mistake in these types of arguments is assuming that because natural, classical-artificial, and neural-net-artificial learning methods all employ some kind of counterexample/counterfactual reasoning, they must work the same way; their underlying mechanisms could well be fundamentally different. These arguments are therefore invalid until computer science advances enough to explain what the differences and similarities actually are.

> Human cognition is nothing like AI "cognition."

I've wondered about this. Do we really know enough about what the human brain is doing to make a statement like this? I feel like if we did, we would be able to model it faithfully and OpenAI, etc. would not be doing what they're doing with LLMs.

What if human cognition turns out to be the biological equivalent of a really well-tuned prediction machine, and LLMs are just a more rudimentary and less-efficient version of this?

Yes, we do. Humans share the statistical association ability that LLMs possess, but also conscious meaning and understanding. This is a difference in kind and means that we can generalize beyond the statistical pattern associations that we've extracted from data, so we don't require trillions of examples to develop knowledge.

Theoretically a human could sit alone in a dark room, knowing nothing of mathematics, and come up with numbers, arithmetic, algebra, etc.

They don't need to read every math textbook, paper, and online discussion in existence.

Our DNA does contain our pre-training, though. It's not true that we're an entirely blank slate.

Pre-training is not a good term if you are trying to compare it to LLM pre-training. Closer would be the model's architecture and learning algorithms, which have been designed through decades of PhD research; and my point on that is that the differences are still much greater than the similarities.

The point I'm trying to make is that I don't think we know, so we can't say either way.

In your example, would the human have ever had contact with other humans, or would it be placed in the room as a baby with no further input?

This is such a boring cliche by now. "Thinking" and "knowing what it's doing" are totally vague notions that we barely understand even for the human mind, yet in every comment section about AI people definitively state that LLMs don't do them, whatever they are.

This is the epitome of learned helplessness, that you need a neuroscience paper to tell you what thinking and knowledge is when you experience it directly all the time, and can't tell that an LLM doesn't have it. Something is extremely evil about these ideologies that are teaching people that they are NPCs.

They aren't so vague that you would argue the parrot is thinking.

> Human cognition is nothing like AI "cognition." It really bothers me that people think AI is doing the same thing the human mind does.

This might sound callous, but I wonder if the people saying this themselves have very limited brains, more akin to stochastic parrots than to the average homo sapiens.

We are very different, and there are some high-profile people who don't even have an internal monologue or self-introspection abilities (one of the other symptoms is having an egg-shaped head).

> AI is more like a parrot which is trained to give a correct-looking response to any question.

A parrot that writes better code and English prose than I do?

I would like to buy your parrot.

If you think this way, then why not talk to LLMs exclusively? Don’t let the oxytocin cloud your ability to problem-solve.

I get you're trying to do the whole "humans and LLMs are the same" bit, but it's just plainly false. Please stop.

> All they do is perform compute-intensive brute-force attacks on the problem/solution space and call it 'reasoning'

If they discover the cure to cancer, I don't care how they did it. "I don't trust anyone who claims they're superhumanly intelligent" doesn't follow from "all they do is <how they work>".

Has generative AI made material progress on curing cancer? Has it produced any breakthroughs, at all?

In b4

- it’s the worst it’ll ever be
- big leaps happened the past few months bro

Etc.

Personally I think LLMs can be very powerful in a narrow band. But the more substance a thing involves, the more a human needs to be involved.

> "I don't trust anyone who claims they're intelligent" doesn't follow from "all they do is <how they work>".

It kind of does if how they work is nothing like genuine intelligence. You can (rightly) think AI is incredible and amazing and going to bring us amazing new medical technologies, without wrongly thinking its super amazing pattern recognition is the same thing as genuine intelligence. It should be worrying if people begin to believe the stochastic parrot is actually wise.

I can slow down the compute by a factor of a thousand. It would not change the result, but it changes the economics. We only call it intelligent because we can do the backpropagation and the inference (and training) fast enough, and with enough memory, for it to appear this way.

If LLMs can come up with superhumanly intelligent solutions, then they're superhumanly intelligent, period. Whether they do this by magic or by stochastic whatever doesn't make any difference at all.

Like... a calculator?

Take a calculator to the International Math Olympiad and let's see how you do.

That's moonshot logic that reinforces the parent's point. You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment.

> You'd absolutely care if the AI's cure to cancer entailed full-body transplants or dismemberment

That's not a cure. Like yes, I'd care if the AI says it cures cancer while nuking Chicago. But that isn't what OP said.

"The cure for cancer" as a phrase doesn't include those solutions. If the headline was "Pope discovers the cure for cancer" and those were his solutions you would say "No he didn't." OP was referring to AI discovering the cure for cancer that cancer research is working towards.

If all they do is "just" brute-force problem solving, then they are already bound to take over R&D and other knowledge work and exponentially accelerate progress, i.e. the SciFi "singularity" BS ends up happening all the same. Whether we classify what they do as true reasoning is just semantics.

A calculator is superhumanly intelligent, then?

Yeah and everything is just atoms. If you reduce anything enough it’s not real.