It's literally text autocomplete. You can dress it up however you want but it takes input text and outputs the most likely next sequence.
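
In the crudest possible terms, that loop looks like this. A toy sketch with made-up bigram counts, nothing like a real model's architecture, just to pin down what "outputs the most likely next sequence" means:

```python
from collections import defaultdict

# Toy "model": bigram counts from a tiny corpus.
# A real LLM replaces this lookup table with a neural network over tokens.
corpus = "the cat sat on the mat the cat ran".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def autocomplete(word, steps=3):
    """Repeatedly pick the most likely next word."""
    out = [word]
    for _ in range(steps):
        followers = counts[out[-1]]
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on"
```

Everything else (sampling temperature, huge context windows, learned weights instead of counts) is refinement of that loop.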

Reminds me of when microwaves first came out. Investors decided to go all in on "vibe cooking" (lit. cooking with vibrations), complete with microwave ranges (no conventional oven), until the public wised up to the fact that there was in fact no cooking (Maillard reaction) involved in their vibe cooking. It took about 15-20 years, but microwaves finally took their rightful place as a utility appliance rather than what they were touted as (a centerpiece). Pick up a microwave cookbook from the 50s for some laughs.

I sure hope you're not mocking the classic "Microwave cooking for one" book!

The Maillard reaction is very possible in microwaves, but it requires microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.

See here for a fun write-up: https://www.lesswrong.com/posts/8m6AM5qtPMjgTkEeD/my-journey...

That book came out much later than what I am talking about, when many workarounds like turntables (and indeed, specialized crockery) had been made available. This thing [0], for example, did not even have a turntable, and yet was created in an "all in" form factor for the American home. It was in production for nine years.

Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.

[0] https://americanhistory.si.edu/collections/object/nmah_10880...

They sure are great for reheating food though. The problem is that a lot of developers think they are Michelin chefs when in reality they are Olive Garden cooks reheating frozen meals.

Workarounds such as turntables. Good lord.

But I think the argument stands that microwaves are basically for heating things up and for steaming a lot of vegetables. (I'll do one ear of corn in the microwave with pepper and spices.) I do have a thick microwave cookbook from the 70s or 80s, but I've mostly only ever used it for vegetable cooking times, and probably less since I started roasting vegetables in the oven a lot of the time. I have cooked some of the other recipes, but not for a very long time.

I understand that a lot of people don't have a lot of choice, but I use mine for just that (I actually got a 4-in-1 when I had to replace the old one after it burst into flames, and that's somewhat useful as a second oven).

This is a very good comparison, I'll be using it.

It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on the vibe cooking, and it took her years to realize how dumb it was.

I hope my kid doesn't get the same kind of memories about my weekend projects.

And same as vibe coding, microwaves just reheat old stuff and create bland food.

This is an unexpectedly apt comparison, and I appreciated it.

The Maillard reaction is not the be all and end all of cooking, mind.

There are still cooking functions on microwaves! And they still come with recipe books!

Hope never dies.

I like this analogy. Maybe microwaves put a few line cooks out of a job, but they didn't replace traditional cooking at all.

“But they’ve added RL so…!!!”

You are obviously right and I see examples of it everywhere.

E.g. I asked Claude Opus 4.7 (the latest/greatest) the other day, "is a RimWorld year 60 days?". The reply (paraphrased): "No, a RimWorld year is 4 seasons of 15 days each, which is 60 days total".

Equally, it gets confused about what is a mod and what is vanilla, since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).

Maybe there was more in the context before that question? I just copy-pasted that question into Opus 4.7 and it replied:

    Yes. A RimWorld year is 60 days, split into four 15-day quadrums (Aprimay, Jugust, Septober, Decembary), each corresponding to a season.

And that is exactly why it is only autocomplete. You probably had less context than the earlier poster, so it could not mix things up. The earlier poster either had more memory, or the search pulled in more topics. And by the way, it's really hard to give it access to only some things.

Not even deterministic autocomplete.

Good job a handful of companies aren’t investing a trillion+ dollars in that.

Can you imagine how silly they’d look when everyone realised.

"Everyone is doing it ⇒ it must be right." See also: bloodletting, leaded gasoline and parachute pants.

Calling the technology "text autocomplete" is not productive to the discussion. Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction, but now it's commonplace. As is the ability to write long-form text that is so hard to distinguish from the real thing that placing an em dash in your text will cause an uproar on this forum. You can describe things by their fundamental functions and make many things sound elementary, but I find it counterproductive given the capabilities we've seen from this technology.

> Calling the technology "text auto complete" is not productive to the discussion.

If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?

> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction

COBOL was sold to people on the idea that anyone could create something with a fuzzy human-readable description that would result in executable code. That was back in the 60s.

What lessons did we learn?

1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.

2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.

It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.

It was the same with COBOL, as it was with programming in spreadsheets in the 80s, as it was with the no-code movement in the 00s, as it is now again with LLMs in the 20s, and as it will be again with a future generation in the 40s.

---

> As is the ability to write long-form text that is so hard to distinguish from the real thing that placing an em dash in your text will cause an uproar on this forum.

Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.

That's when we got the first instances of the Eliza effect.

> You can describe things by their fundamental functions and make many things sound elementary, but I find it counterproductive given the capabilities we've seen from this technology

The capabilities we've seen are:

- Text prediction/generation

- Inducing the Eliza effect

Your house is literally just a box. You can dress it up however you want but it has 4 walls and a lid.

Mine has like, 8 walls, but sure. It's a box. Crucially, it was sold as a box. Not a thinking machine.

Your attempt at an analogy will make sense when someone tries to install a house as middle management at some company.

If you ignore all the complexity and discard every detail, it’s literally just a box. Yet curiously you aren’t living in a cardboard box, or an aluminum shed.

The point, which you know and are being willfully ignorant about, is that it's more complex than that. And you've neatly discarded the detail that they're multimodal.

I will freely admit though, analogy is useless when interacting with someone who has already made up their mind.

I'm pretty sure it was sold as a house. That you understand that you can think of it as a box doesn't make it not a house. That's the point of the analogy.

The secret to woodworking is that everything is a box. The secret to AI is that everything is token matching.

AI is text autocomplete. This is the best AI definition I have heard, and I agree with it 100%. Thank you.

It's literally how they work. I think the magic that none of us really expected is that our languages, human and computer, are absurdly redundant. But I think it makes sense, in hindsight at least. When we say things it's usually not to add novel or unexpected information that comes out of nowhere, but to elaborate on or illustrate a point that could often be summed up in 5 words. This response is a perfect example of that.

> AI is text autocomplete. This is the best AI definition I have heard, and I agree with it 100%. Thank you.

To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.

> To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.

How?

It all still functions with text prediction

> It all still functions with text prediction

Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on "it's just text prediction", then that's a problem you are creating for yourself, and only you can get unstuck from a problem of your own making.

> It all still functions with text prediction

>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on "it's just text prediction", then that's a problem you are creating for yourself, and only you can get unstuck from a problem of your own making.

Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?

> I can point you to ReAct loops and tool-calling and agent-based systems.

Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.

Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.

"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
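
To make that concrete, here's a sketch of the loop I'm describing. The model function and tool names are stand-ins I made up, not any vendor's actual API; the shape of the loop is the point:

```python
import json

def fake_model(prompt):
    """Stand-in for an LLM completion call: it just continues the text.
    Here it deterministically 'decides' to emit one tool call."""
    if "TOOL_RESULT" not in prompt:
        return '{"tool": "get_weather", "args": {"city": "Oslo"}}'
    return "It is 4 degrees in Oslo."

def run_tool(name, args):
    """Stand-in for executing a tool (in real systems, an HTTP call
    or a command-line invocation)."""
    return {"get_weather": lambda a: f"4C in {a['city']}"}[name](args)

def agent(prompt):
    # The whole "agent loop": complete, parse, execute, append, complete again.
    while True:
        completion = fake_model(prompt)
        try:
            call = json.loads(completion)   # did the model emit a tool call?
        except json.JSONDecodeError:
            return completion               # no: treat it as the final answer
        result = run_tool(call["tool"], call["args"])
        prompt += f"\nTOOL_RESULT: {result}\n"  # feed the result back as text

print(agent("What's the weather in Oslo?"))  # -> "It is 4 degrees in Oslo."
```

Real harnesses add retries, schemas, and streaming, but the control flow is this: parse the completed text, act on it, append the result, and complete again.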

If you're going to make such strong assertions, you should understand the technology underneath, or you'll come off looking like an idiot.

> Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call.

No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute the tool (meaning, they build command-line arguments, run the command-line app, analyze output, assess outcome) as subtasks.

And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.

> And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how they call the tool until they reach the goal.

Yeah, but fundamentally all of this is implemented as next token prediction, given the context (which the tool results are).

Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.

> I can point you to ReAct loops and tool-calling and agent-based systems.

Those literally work with text prediction.

If you take the text prediction out of it, nothing happens.

You stick a harness around a text predictor which then triggers the text predictor.

If you think I am missing something then please do point it out.

[deleted]

Is "text autocomplete" supposed to be an insult? To text-autocomplete a physicist I would have to understand physics as well as they do. To text-autocomplete your words I would need to model your brain.

By design. At least until we move away from attention being at the core of LLMs.

It's not attention that's the problem, it's how we train networks offline with backprop.

LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).

We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.

Even "chain of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
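
That delta is easy to show at toy scale. A minimal sketch of next-token training, assuming a character-level bigram model (a plain logits table, not a transformer) trained with the textbook softmax + cross-entropy gradient:

```python
import numpy as np

# Toy next-token setup: learn P(next char | current char) on a tiny string.
text = "abababab"
vocab = sorted(set(text))                     # ['a', 'b']
idx = {c: i for i, c in enumerate(vocab)}
xs = np.array([idx[c] for c in text[:-1]])    # input tokens
ys = np.array([idx[c] for c in text[1:]])     # target "next" tokens
V = len(vocab)

W = np.zeros((V, V))                          # logits table (the whole "model")
for step in range(200):
    logits = W[xs]                                        # (N, V)
    p = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)
    loss = -np.log(p[np.arange(len(ys)), ys]).mean()      # cross-entropy
    # Backprop: for softmax + cross-entropy, dL/dlogits = p - onehot(y)
    grad_logits = p.copy()
    grad_logits[np.arange(len(ys)), ys] -= 1
    grad_logits /= len(ys)
    gW = np.zeros_like(W)
    np.add.at(gW, xs, grad_logits)            # accumulate grads per input token
    W -= 1.0 * gW                             # gradient descent step

# After training, 'a' strongly predicts 'b' and vice versa; loss is near zero.
print(round(loss, 3))
```

The training signal exists only because the "desired output state" (the next token in recorded text) is lying around for free. There is no equivalent corpus of recorded internal thought states to train a richer loop against, which is the commenter's point.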

For a good reminder of the limitations of AI (or, well, the OAI GPT 5.3 default model for non-paying users), I did an experiment recently (just a week-ish ago): https://smileplease.mataroa.blog/blog/how-many-e-are-in-stra...

image: https://mataroa.blog/images/b5c65214.png

but it says that there are 3 e's in strawberry ;)

Now this is literally something that occurs because of it being text autocomplete, and because of the inherent issues of token-based large language models. So you are literally right :D
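
The mechanics of the failure are easy to show. Counting characters is trivial in code, but a token-based model never sees characters, only opaque token ids (the split below is illustrative only, not any real tokenizer's output):

```python
# What the question actually asks: count characters.
word = "strawberry"
print(word.count("e"))   # -> 1

# What a token-based model "sees": token ids, not letters.
# Illustrative split only -- real tokenizers differ, and the ids are made up.
fake_tokens = ["straw", "berry"]
fake_ids = [4321, 8765]   # the letters inside each token are not visible here
```

So the model can't inspect the string; it has to have effectively memorized letter counts from training data, which is why it pattern-matches to the famous "how many r's" question and answers 3.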

My point is that AI can have its issues and it can have its plus points (just like text autocomplete, though some suggest it's autocomplete on steroids).

The issue to me feels like we are hammering it into absolutely everything and anything; perhaps it should be used more selectively, y'know, like a tool?

Yes, AI should be used as a tool for very specific things. Once it's trained on everything, it's completely useless. Anyone who is trying to use it for everything will fail. I predict the AI bubble will burst by 2030 (if not much sooner). The only good outcome will be all this hardware being liquidated for pennies. Mark this prediction: it will happen ;-)

I definitely hope you're right

> Mark this prediction it will happen

But historically this is a very strong predictor of a poor prediction.

If you’re so sure, just make a leveraged bet and become a millionaire. Put your money where your mouth is if you’re so convinced.

Knowing it will fail is one thing but knowing when and how things blow up is another.

Grok auto: “Strawberry” has only one “e”. S T R A W B E R R Y

Gemini: There is *1* "e" in the word "strawberry".

Seems fine

So you have subscriptions to all the hyperscalers and make them vote on what's the correct answer?

Your brain is also an autocomplete at this point. Notice how you write each word, one after the other, flawlessly

Your comment was also completion.

This retort doesn't make any sense. Take humanity back perhaps 40k years ago and language did not even yet exist. Our token base was 0. Put an LLM in that scenario and it will endlessly cycle on nothing and produce nothing, stuck in a snapshot in time. Put humans in that situation, and soon enough you get us.

This is like saying that somebody speaking Chinese is just playing out the Chinese Room [1] experiment. The only reason it's less immediately obviously absurd here is because the black-box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize it into being a brain.

[1] - https://en.wikipedia.org/wiki/Chinese_room

If that is the argument though, current AI aren't just autocomplete - because we could reasonably show an AI an image or a video and have them call a tool rather than return text. That'd be comparable to a pre-language human.

> Take humanity back perhaps 40k years ago and language did not even yet exist.

This is not quite accurate. The human lips, throat etc have evolved to be better at producing speech, which indicates that it's not that recent. And that it was a factor in the success of groups who could do it better than others.

It likely started "no later than 150,000 to 200,000 years ago."

sources:

https://en.wikipedia.org/wiki/Origin_of_speech#Evolution_of_...

https://pmc.ncbi.nlm.nih.gov/articles/PMC5525259/

And what would that make yours?

I think, therefore I am. You parrot, therefore you are... ?

Sufficiently good text autocomplete is indistinguishable from intelligence to an impartial observer, and that's the only honest criterion for intelligence.

Can't tell if sarcasm

I'm a little shocked that people discussing this topic could be so far apart! I'm completely serious.

Have you ever thought about how you would determine if an arbitrary given entity is intelligent or not? I think you'll agree it would require some kind of test. You might agree that the test would have to involve bidirectional interaction (since otherwise it would be impossible to distinguish an actual person from a recording of one).

> It's literally text autocomplete. You can dress it up however you want but it takes input text and outputs the most likely next sequence.

Last year this level of ignorance and cluelessness was amusing. Nowadays it's just sad and disappointing. It's like looking at a computer and downplaying it as something that just flips switches on and off.

Gitlab is looking to lay off people like him. All major tech companies are currently rushing to fire such employees.

Yeah they all want to fire the guys who can make sense of the mess the vibe coders are doing and try to stop it.

It will be interesting in the next few years. Assuming we won't be in the 3rd world war thanks to the USA and will have much bigger concerns.

> Yeah they all want to fire the guys who can make sense of the mess the vibe coders are doing and try to stop it.

You're grossly inflating the level of contribution from your average software developer. Are we supposed to believe that the same people who generated the high volume of mess that plagues legacy systems are now somehow suddenly exemplary craftsmen?

Also, it takes a huge volume of wilful ignorance and self delusion to fool yourself into believing that today's vibecoders are anyone other than yesterday's software developers. The criticism you are directing towards vibecoding is actually a criticism of your average developer's output reflecting their skill and know-how once their coding output outpaces or even ignores any kind of feedback from competent and experienced engineers.

What I see is a need to shit on a tool to try to inflate your sense of self-worth.

I've seen which developers became vibecoders. They were the people I'd have wished to get rid of.

The ones who never acknowledge a mistake even if the process is crashing; the ones who put "return true" in a test so that the test doesn't execute and will insist that you broke their code if you remove the return true and when the test actually runs it fails; the ones who read a blog post about some new thing and decide we need to do like that; the ones who will write code that fails and then be nowhere to be seen when there is customer support to do.

> I've seen which developers became vibecoders. They were the people I'd have wished to get rid of.

Trying to portray everyone who ever used a tool as the incompetent cohort is an exercise in self-delusion.

Using AI ≠ vibecoding.

> Gitlab is looking to lay off people like him. All major tech companies are currently rushing to fire such employees.

Gitlab has been strapped for cash and desperately seeking a buyer to cash out for years.

If anything, the LLM revolution represents an opportunity that Gitlab is failing to capitalize on. They are in a privileged position to sell pickaxes for this gold rush, but apparently they are choosing to remove themselves from the race altogether.

Gitlab's decision is being taken in spite of LLMs, not because of them. Enough of this tired meme.

Ahh, are we there yet? Has non-deterministic computer use eroded your mind so much that you are starting to question the binary system? You know, the insight that computers are something that flips switches on and off is rather old, and I have heard it uttered (albeit slightly humorously) several times already, with nobody ever raising an eyebrow.