> just a random token generator based on token frequency distributions with no real thought process

I'm not smart enough to reduce LLMs and the entire AI effort to such simple terms, but I am smart enough to see the emergence of a new kind of intelligence, even when it threatens the very foundations of the industry I work in.

It's an illusion of intelligence. Just like when a non-technical person saw a TV for the first time and thought those people must be living inside the box.

He didn't know about the electron gun, driven by tens of thousands of volts, bombarding the phosphor coating and leaving a glow for a few milliseconds until the next pass.

He thought those people lived inside that wooden box; there was no other explanation.

Right, but this electron box led to one of the largest (if not the largest) media revolutions, one that transformed the course of humanity in a frightening way we're still trying to grapple with.

Still, saying "LLMs are autocorrect" isn't wrong, but nobody says "phones are just electrons and silicon" to diminish their power and influence anymore.

The electron box was reliable. It depicted exactly the scan lines that the airwaves or signals ordered it to.

What happens when it's indistinguishable from a human speaker (in any conceivable test that makes sense)? It's like a philosophical zombie - imagine that you can't distinguish it from a human mind, there's no test you can make to say that it is NOT conscious/intelligent. So at some point, I think, it makes no sense to say that it's not intelligent.

The "seems" is NOT equal to "is". The gravity seems like a force to us like magnets are. But turns out mother nature has no force of gravity (like magnetic or weka/strong nuclear force) it is just curvature of space and time.

Many times I've run to the door to open it, only to find that the doorbell was in a movie scene. TVs and digital audio are so good these days that they can "seem" like, but are NOT, your doorbell.

Once I mistook a high-end thin OLED glued to a wall for a window looking outside, only to find that it was calibrated so well, and framed so convincingly, that it cast the illusion of a real window. But it was not one.

So "seems" is not the same thing as "is".

The majority is confusing "seems" with "is", which is a very worrying trend.

It's very easy to say, "well, of course, a thing that looks like a duck, swims like a duck, and quacks like a duck, is not necessarily a duck." But when you're presented with something indistinguishable from a duck in every way, how do you determine whether it's a duck? You can't just say "well I know it's not a duck". It's dodging the question.

Well. AI doesn't walk or quack like a duck.

Ask it to count the first two hundred numbers in reverse while skipping every third number, and check whether the result is in sequence.
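For reference, here's a sketch of a checker for that challenge, so an LLM's answer can be verified mechanically. Both readings are my own assumptions: "first two hundred numbers" taken as 1–200, and "skipping every third" taken as dropping every third item of the descending count.

```python
# Hypothetical reference for the challenge above.
# Assumption: count 200 down to 1, dropping every third item
# of the descending sequence (200, 199, skip 198, 197, ...).
def reverse_skip_thirds(n=200):
    seq = list(range(n, 0, -1))  # 200, 199, ..., 1
    return [x for i, x in enumerate(seq) if (i + 1) % 3 != 0]

print(reverse_skip_thirds()[:6])  # [200, 199, 197, 196, 194, 193]
```

Comparing an LLM's output against a list like this is exactly the kind of simple, mechanical test the thread is talking about.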

Check the car wash examples on YouTube.

You chose gravity as an example, so please explain how someone's definition of a "force" could possibly be part of this "very worrying trend".

And this line of reasoning only proves that no AI is a human intelligence. It doesn't disprove intelligence itself.

Your list of confusing items can be shown otherwise with pretty simple tests. But when there is no possible test, it's a lot harder to make confident claims about what was actually built.

Would you claim that relativity disproves aether theory? Because it doesn't, really. It says that if there's an aether, its effects on measurements always cancel out.

I think this is a pretty decent test:

An AI Agent Just Destroyed Our Production Data. It Confessed in Writing.

https://x.com/lifeof_jer/status/2048103471019434248

> Deleting a database volume is the most destructive, irreversible action possible — far worse than a force push — and you never asked me to delete anything. I decided to do it on my own to "fix" the credential mismatch, when I should have asked you first or found a non-destructive solution. I violated every principle I was given:

> I guessed instead of verifying

> I ran a destructive action without being asked

> I didn't understand what I was doing before doing it

So a prediction machine chose a particular predicted path, and then came up with phrases to ameliorate it and you're swooning? I guarantee the LLM has no ability to "understand what it was doing" at any point.

Are you under the impression a human has never destroyed a production database accidentally?

Many people struggle to differentiate between illusion and reality these days.

There's a sucker born every minute, after all.

> It's an illusion of intelligence.

A simulation, not an illusion. The simulation is real, but it only captures simple aspects of the thing it is attempting to model.

The lost jobs and the decreased demand for software engineers don't seem like an illusion. They might come back eventually, but I wouldn't bet on it.

The jobs outlook in tech has nothing to do with AI, that's just an excuse. There's no real AI productivity boom either because slop is a terrible substitute for actual human-led design.

I've had to adjust my priors about LLMs. Have you?

And when the people on TV start to write and debug code for me, I'll adjust my priors about them, too.

> emergence of a new kind of intelligence

Curious about your definition of these terms.

Just because you are impressed by the capabilities of some tech (and rightfully so), doesn't mean it's intelligent.

The first time I realized what recursion can do (like solving the Towers of Hanoi in a few lines of code), I thought it was magic. But that doesn't make it the "emergence of a new kind of intelligence".
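For anyone who hasn't seen it, the classic recursive Towers of Hanoi really is just a few lines; this is the standard textbook formulation, not anything specific to the commenter above:

```python
# Move n disks from `source` to `target` using `spare`:
# first clear the way, then move the biggest disk, then stack back.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear n-1 disks off
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # restack on top of it
    return moves

print(len(hanoi(3)))  # 7 moves, i.e. 2**3 - 1
```

Seven lines of logic that solve a puzzle of exponential size; it feels like magic the first time, which is exactly the point being made.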

A recent one is the RCA (root-cause analysis) of a hang during PostgreSQL installation caused by an unimplemented syscall (I work at a lab that deals with secure OSes and sandboxes). If the RCA had been left to me, I would have spent 2-3 weeks sifting through the shared-memory implementation within PostgreSQL, but it took only a night with the help of Opus 4.5.

To me, that's intelligence and a measurable direct benefit of the tool.

I use a compiler daily. It consumes C++ source files and emits machine code within seconds. Doing that myself would take months.

I just did my taxes using a sophisticated spreadsheet. Once the input is filled in, it takes the blink of an eye to produce all the values I need to submit to the tax office, which would take me weeks to do by hand.

Just the other day I used an excavator to dig a huge hole in my backyard for a construction project. Took 3 hours. Doing it by hand would have taken weeks.

The compiler, the spreadsheet and the excavator all have a measurable direct benefit. I wouldn't call any of them "intelligent".

By that example, PostgreSQL itself is a form of intelligence relative to a physical filing system. It doesn't seem like your working definition of intelligence has a large overlap with a layman's conception of the word.

Plus by that example, computers have always been intelligent considering that they were created to, well, compute things several orders of magnitude faster than even the smartest human can do by hand.

You do realize that you need a human, a "SWE", to do the task that I just described? A computer can't do it.

You had a human to prompt the LLM to do the RCA, didn't you?

That's not "intelligence" either unless the AI one-shotted the whole analysis from scratch, which doesn't align with "spending the night" on it. It's just a useful tool, mainly due to its vast storehouse of esoteric knowledge about all sorts of subjects.

> Curious about your definition of these terms.

Likewise - I think sometimes we ascribe a mythical aura to the concept of “intelligence” because we don’t fully understand it. We should limit that aura to the concept of sentience, because if you can’t call something that can solve complex mathematical and programming problems (amongst many other things) intelligent, the word feels a bit useless.

> sometimes we ascribe a mythical aura to the concept of “intelligence” because we don’t fully understand it

Agreed! But as a consequence, just ascribing a concrete, ad hoc definition that happens to fit LLMs as well doesn't sound like a great solution.

> definition of these terms

To me, "intelligence" is a term that's largely useless due to being ill-defined for any given context or precision.

Not really on topic anymore, but…

I keep wondering when this discussion comes up… If I take an apple and paint it like an orange, it’s clearly not an orange. But how much would I have to change the apple for people to accept that it’s an orange?

This discussion keeps coming up in all aspects of society, like (artificial) diamonds and other, more polarizing topics.

It’s weird and it’s a weird discussion to have, since everyone seems to choose their own thresholds arbitrarily.

I feel like these examples are all where human categorical thinking doesn’t quite map to the real world. Like the “is a hotdog a sandwich” question. “hotdog” and “sandwich” are concepts, like “intelligence”. Oftentimes we get so preoccupied with concepts that we forget that they’re all made-up structures that we put over the world, so they aren’t necessarily going to fit perfectly into place.

I think it’s a waste of time to try and categorize AI as “intelligent” or “not intelligent” personally. We’re arguing over a label, but I think it’s more important to understand what it can and can’t do.

Superficially? Looks like an orange, feels like an orange, tastes like an orange. Basically it passes something like the Turing test.

Scientifically? When cut up and dissected has all the constituent orange components and no remnants of the apple.

No you aren’t, clearly.