A lot of the conclusions they're drawing in this post about the "agentic era" seem quite misguided, and some don't really make sense.
I have no doubt GitLab has too many employees and can benefit from being a more focused company, but it's tiring reading these layoff posts so chock full of buzzwords. I guess they're desperately hoping if they prognosticate about AI enough it will placate the investors.
Let these people keep betting their companies, futures and net competency on text autocomplete. The future is bright for me and everyone else that isn't falling for it.
Calling it text autocomplete is played out and really just makes you look bad at this point.
It's literally text autocomplete. You can dress it up however you want but it takes input text and outputs the most likely next sequence.
Reminds me of when microwaves first came out. Investors decided to go all in on "vibe cooking" (lit. cooking with vibrations), complete with microwave ranges (no conventional oven), until the public wised up to the fact that there was in fact no cooking (Maillard reaction) involved in their vibe cooking. Took about 15-20 years, but microwaves finally took their rightful place as a utility appliance rather than what they were touted as (a centerpiece). Pick up a microwave cookbook from the 50s for some laughs.
I sure hope you're not mocking the classic "Microwave cooking for one" book!
The Maillard reaction is very possible in microwaves, but it requires microwave-specific crockery. I think the vision was possibly killed by people not wanting to maintain a second set of crockery.
See here for a fun write-up: https://www.lesswrong.com/posts/8m6AM5qtPMjgTkEeD/my-journey...
That book came out much later than what I am talking about, when many workarounds like turntables (and indeed, specialized crockery) were made available. This thing [0], for example, did not even have a turntable, and yet was created in an "all in" form factor for the American home. It was in production for nine years.
Perhaps we can liken these auxiliary advances to agents and harnesses in the analogy. In the end, despite the unbridled optimism from certain backers, we never solved the fundamental issue with microwaves: that they use electromagnetic waves for cooking, and that electromagnetic waves have certain undesirable properties for this application.
[0] https://americanhistory.si.edu/collections/object/nmah_10880...
They sure are great for reheating food though. The problem is that a lot of developers think they are Michelin chefs when in reality they are Olive Garden cooks reheating frozen meals.
Workarounds such as turntables. Good lord.
But I think the argument is that microwaves are basically for heating things up and for essentially steaming a lot of vegetables. (I'll do one ear of corn in the microwave with pepper and spices.) I do have a thick microwave cookbook from the 70s or 80s, but I've mostly only ever used it for vegetable cooking times, and probably less since I started roasting vegetables in the oven a lot of the time. I have cooked some of the other recipes, but not for a very long time.
I understand that a lot of people don't have a lot of choice, but that's how I use mine (I actually got a 4-in-1 when I had to replace the old one after it burst into flames, and that's somewhat useful as a second oven).
This is a very good comparison, I'll be using it.
It just made me realize why I don't have those fond memories of my mom's cooking. When we got our first microwave she went all in on vibe cooking, and it took her years to realize how dumb it was.
I hope my kid doesn't get the same kind of memories about my weekend projects.
And same as vibe coding, microwaves just reheat old stuff and create bland food.
This is an unexpectedly apt comparison, and I appreciated it.
The Maillard reaction is not the be all and end all of cooking, mind.
There are still cooking functions on microwaves! And they still come with recipe books!
Hope never dies.
I like this analogy. Maybe microwaves put a few line cooks out of the job, but it didn't replace traditional cooking at all.
“But they’ve added RL so…!!!”
You are obviously right and I see examples of it everywhere.
E.g. I asked Claude Opus 4.7 (the latest/greatest) the other day: "Is a Rimworld year 60 days?" The reply (paraphrased): "No, a Rimworld year is 4 seasons, each of 15 days, which is 60 days total".
Equally, it gets confused about what is a mod or vanilla since it is just predicting based on what it read on forums, which are clearly ambiguous enough (to a dumb text predictor).
Maybe there was more in the context before that question? I just copy-pasted that question into Opus 4.7 and it replied:
And that is the reason why it is only autocomplete. You probably had less context than the previous poster, so it could not mix stuff up. The previous poster either had more memory or the search went through more topics. And btw, it's really hard to give it access to only some things.
Not even deterministic autocomplete.
Good job a handful of companies aren’t investing a trillion+ dollars in that.
Can you imagine how silly they’d look when everyone realised.
"Everyone is doing it ⇒ it must be right." See also: bloodletting, leaded gasoline and parachute pants.
Calling the technology "text auto complete" is not productive to the discussion. Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction, but now it's commonplace. As is the ability to write long-form text that is so hard to distinguish from the real thing that placing an em dash in your text will cause an uproar on this forum. You can describe things by their fundamental functions and make many things sound elementary, but I find it counterproductive given the capabilities we've seen from this technology.
> Calling the technology "text auto complete" is not productive to the discussion.
If pointing out the flawed approach to making something more productive isn't productive, then what do you consider to be productive?
> Less than a decade ago the idea that a computer could take a fuzzy human-readable description and turn it into executable code was science fiction
COBOL was sold to people on the idea that anyone could create something with a fuzzy human-readable description that would result in executable code. That was back in the 60s.
What lessons did we learn?
1) Leaving things to the people who make fuzzy human readable descriptions turns out to be a terrible way to have things implemented.
2) Slowly and deliberately thinking things through before, during, and after implementation always leads to better results.
It's a lesson that keeps needing to be re-learned by people who don't/can't look at things through a historical lens.
It was the same with COBOL, as it was with programming in spreadsheets in the 80s, as it was with the no-code movement in the 00s, as it is now again with LLMs in the 20s, and it will be again with a future generation in the 40s.
---
> As is the ability to write long form text, and be so hard to distinguish from real that placing an em dash in your text will cause an uproar on this forum.
Long form text generation that is hard to distinguish from human authored text also goes back to the 60s.
That's when we got the first instances of the Eliza effect.
> You can describe things by their fundamental functions and make many things sound elementary but I find it counter productive given the capabilities we've seen from this technology
The capabilities we've seen are:
- Text prediction/generation
- Inducing the Eliza effect
Your house is literally just a box. You can dress it up however you want but it has 4 walls and a lid.
Mine has like, 8 walls, but sure. It's a box. Crucially, it was sold as a box. Not a thinking machine.
Your attempt at an analogy will make sense when someone tries to install a house as middle management at some company.
If you ignore all the complexity and discard every detail, it’s literally just a box. Yet curiously you aren’t living in a cardboard box, or an aluminum shed.
The point, which you know and are being willfully ignorant about, is that it's more complex than that. And you've neatly discarded the detail that they're multi-modal.
I will freely admit though, analogy is useless when interacting with someone who has already made up their mind.
I'm pretty sure it was sold as a house. That you understand that you can think of it as a box doesn't make it not a house. That's the point of the analogy.
The secret to woodworking is that everything is a box. The secret to AI is that everything is token matching.
AI is a text autocomplete. This is the best AI definition I've heard, and I agree with it 100%. Thank you.
It's literally how they work. I think the magic that none of us really expected is that our languages, human and computer, are absurdly redundant. But I think it makes sense, in hindsight at least. When we say things it's usually not to add novel or unexpected information that comes out of nowhere, but to elaborate or illustrate a point that could often be summed up in 5 words. This response is a perfect example of that.
> AI is a text autocomplete. This is the best AI definition I've heard, and I agree with it 100%. Thank you.
To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.
> To believe that first you would have to ignore tool calling, ReAct loops, and the whole agent feature. That would be silly.
How?
It all still functions with text prediction
> It all still functions with text prediction
Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on the "it's just text prediction" line, then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
> It all still functions with text prediction
>> Wilful ignorance can't be fixed. As the saying goes, you can lead a horse to water but you can't make it drink. I can point you to ReAct loops and tool-calling and agent-based systems. If after being pointed to those you still choose to be stuck on the "it's just text prediction" line, then that's a problem you are creating for yourself, and only you can get unstuck on a problem of your own making.
Woof, you're sounding mighty aggressive for someone with such a fundamental misunderstanding of the technology you are defending. Have you ever actually implemented a system around an LLM, or do you practice ~~voodoo~~ "prompt engineering"?
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those are all implemented - quite literally - by parsing the *text* that the LLM *autocompletes* from the prompt.
Tool calling? The model emits JSON as it autocompletes the prompt, and the JSON is then parsed out and transformed into an HTTP call. The response is then appended to the ongoing prompt, and the LLM is called again to *autocomplete* more output.
"ReAct loops" and "agent based systems" are the same goddamn thing. You submit a prompt and parse the output. You can wrap it up in as many layers as you want but autocomplete with some additional parsing on the output is still fucking autocomplete.
If you're going to make such strong assertions, you should understand the technology underneath or you'll come off looking like an idiot.
> Tool calling? The model emits JSON as it autocompletes the prompt, and the json is then parsed out and transformed into an HTTP call.
No. Code assistants determine which tool they can execute to meet a specific goal. They pick the tool, then execute the tool (meaning, they build command line arguments, run the command line app, analyze output, assess outcome) as subtasks.
And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how to call the tool until they reach the goal.
> And they do it as part of ReAct loops. If the tool fails to run, code assistants can troubleshoot problems on the fly and adapt how to call the tool until they reach the goal.
Yeah, but fundamentally all of this is implemented as next token prediction, given the context (which the tool results are).
Honestly, it's pretty amazing how much we can do with next token prediction, but that's essentially all that's happening here.
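In sketch form, with model() as a hypothetical function returning one probability per vocabulary token given the context so far:

```python
# Greedy next-token decoding in miniature. model() is a hypothetical
# function returning a probability for each token in the vocabulary.
def generate(model, prompt_tokens: list[int], n_tokens: int) -> list[int]:
    context = list(prompt_tokens)
    for _ in range(n_tokens):
        probs = model(context)  # P(next token | everything so far)
        context.append(max(range(len(probs)), key=probs.__getitem__))
    return context
```

Tool results, user messages, "reasoning" traces: they all just get appended to the context and run back through that same loop.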
> I can point you to ReAct loops and tool-calling and agent-based systems.
Those literally work with text prediction.
If you take the text prediction out of it, nothing happens.
You stick a harness around a text predictor which then triggers the text predictor.
If you think I am missing something then please do point it out.
Is "text autocomplete" supposed to be an insult? To text auto-complete a physicist I would have to understand physics as well as them. To text-autocomplete your words I would need to model your brain.
By design. At least until we move away from attention being at the core of LLMs
It's not attention that's the problem, it's how we train networks offline with backprop.
LLMs are the most successful form of neural network we have, and that's because they are token prediction machines. Token predictors are easy to train because we're surrounded by written text - there's data nicely structured for use as training data for token prediction everywhere, free for the taking (especially if you ignore copyright law and robots.txt and crawl the entire web).
We can't train an LLM to have a more complex internal thought loop because there's no way to synthesize or acquire that internal training data in a way where you could perform backprop training with it.
Even "train of thought" models are reducing complex thoughts to simple token space as they iterate, and that is required because backprop only works when you can compute the delta between <input state> and <desired output state>. It can't work for anything more complicated or recursive than that.
For a good reminder for people on the limitations of AI (or, well, the OAI GPT 5.3 default model for non-paying users), I did an experiment recently (just a week-ish ago): https://smileplease.mataroa.blog/blog/how-many-e-are-in-stra...
image: https://mataroa.blog/images/b5c65214.png
but it says that there are 3 e's in strawberry ;)
Now this is literally something which occurs because it is text autocomplete, and it's an inherent issue of token-based large language models. So you are literally right :D
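You can see the root cause directly with a tokenizer like OpenAI's tiktoken (assuming the package is installed; the exact splits vary by encoding):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
print([enc.decode([t]) for t in tokens])  # e.g. ['str', 'aw', 'berry']
# The model sees a few opaque token ids, never the individual letters,
# so "count the letters" questions give it nothing to count.
```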
My point is that AI can have its issues and it can have its plus points (just like text autocomplete, but some suggest it's on steroids).
The issue to me feels like we are hammering it in absolutely everything and anything, perhaps it should be used more selectively, y'know, like perhaps a tool?
Yes, AI should be used as a tool for very specific things. Once it's trained on everything, it's completely useless. Anyone who is trying to use it for everything will fail. I predict by 2030 (if not much sooner) the AI bubble will burst. The only good outcome will be all this hardware being liquidated for pennies. Mark this prediction, it will happen ;-)
I definitely hope you're right
> Mark this prediction it will happen
But this historically is a very strong predictor of a poor prediction
If you're so sure, just make a leveraged bet and become a millionaire. Put your money where your mouth is if you're so convinced.
Knowing it will fail is one thing but knowing when and how things blow up is another.
Grok auto: “Strawberry” has only one “e”. S T R A W B E R R Y
Gemini: There is *1* "e" in the word "strawberry".
Seems fine
They meant to say the letter "r".
See: https://fediverse.zachleat.com/@zachleat/116529994444529036
So you have subscriptions to all the hyperscalers and make them vote on what's the correct answer?
Your brain is also an autocomplete at this point. Notice how you write each word, one after the other, flawlessly
Your comment was also completion.
This retort doesn't make any sense. Take humanity back perhaps 40k years ago and language did not even yet exist. Our token base was 0. Put an LLM in that scenario and it will endlessly cycle on nothing and produce nothing, stuck in a snapshot in time. Put humans in that situation, and soon enough you get us.
This is like saying that somebody speaking Chinese is just playing out the Chinese Room [1] experiment. The only reason it's less immediately obviously absurd here is because the black-box nature of LLMs obfuscates their relatively basic algorithmic functionality and lets people anthropomorphize it into being a brain.
[1] - https://en.wikipedia.org/wiki/Chinese_room
If that is the argument though, current AI aren't just autocomplete - because we could reasonably show an AI an image or a video and have them call a tool rather than return text. That'd be comparable to a pre-language human.
> Take humanity back perhaps 40k years ago and language did not even yet exist.
This is not quite accurate. The human lips, throat etc have evolved to be better at producing speech, which indicates that it's not that recent. And that it was a factor in the success of groups who could do it better than others.
It likely started "no later than 150,000 to 200,000 years ago."
sources:
https://en.wikipedia.org/wiki/Origin_of_speech#Evolution_of_...
https://pmc.ncbi.nlm.nih.gov/articles/PMC5525259/
And what would that make yours?
I think, therefore I am. You parrot, therefore you are... ?
Sufficiently good text autocomplete is indistinguishable from intelligence to an impartial observer, and that's the only honest criterion for intelligence.
Can't tell if sarcasm
I'm a little shocked that people discussing this topic could be so far apart! I'm completely serious.
Have you ever thought about how you would determine if an arbitrary given entity is intelligent or not? I think you'll agree it would require some kind of test. You might agree that the test would have to involve bidirectional interaction (since otherwise it would be impossible to distinguish an actual person from a recording of one).
> It's literally text autocomplete. You can dress it up however you want but it takes input text and outputs the most likely next sequence.
Last year this level of ignorance and cluelessness was amusing. Nowadays it's just sad and disappointing. It's like looking at a computer and downplaying it as something that just flips switches on and off.
Gitlab is looking to lay off people like him. All major tech companies are currently racing to fire such employees.
Yeah they all want to fire the guys who can make sense of the mess the vibe coders are doing and try to stop it.
It will be interesting in the next few years. Assuming we won't be in the 3rd world war thanks to the USA and will have much bigger concerns.
> Yeah they all want to fire the guys who can make sense of the mess the vibe coders are doing and try to stop it.
You're grossly inflating the level of contribution from your average software developer. Are we supposed to believe that the same people who generated the high volume of mess that plagues legacy systems are now somehow suddenly exemplary craftsmen?
Also, it takes a huge volume of wilful ignorance and self delusion to fool yourself into believing that today's vibecoders are anyone other than yesterday's software developers. The criticism you are directing towards vibecoding is actually a criticism of your average developer's output reflecting their skill and know-how once their coding output outpaces or even ignores any kind of feedback from competent and experienced engineers.
What I see is a need to shit on a tool to try to inflate your sense of self-worth.
I've seen which developers became vibecoders. They were the people I'd have wished to get rid of.
The ones who never acknowledge a mistake even if the process is crashing; the ones who put "return true" in a test so that the test doesn't execute and will insist that you broke their code if you remove the return true and when the test actually runs it fails; the ones who read a blog post about some new thing and decide we need to do like that; the ones who will write code that fails and then be nowhere to be seen when there is customer support to do.
> I've seen which developers became vibecoders. They were the people I'd have wished to get rid of.
Trying to portray everyone who ever used a tool as the incompetent cohort is an exercise in self-delusion.
Using AI ≠ vibecoding.
> Gitlab is looking to lay off people like him. All major tech companies are currently racing to fire such employees.
Gitlab has been strapped for cash and desperately seeking a buyer to cash out for years.
If anything, the LLM revolution represents an opportunity that Gitlab is failing to capitalize upon. They have a privileged position to develop pickaxes for this gold rush, but apparently they are choosing to dismiss themselves from the race altogether.
Gitlab's decision is being taken in spite of LLMs, not because of them. Enough of this tired meme.
Ahh, are we there yet? Has non-deterministic computer use eroded your mind so much that you are starting to question the binary system? You know, the insight that computers are something that flips switches on and off is rather old, and I have heard it uttered (although slightly humorously) several times already, nobody ever raising any eyebrow hearing it.
I've used this text autocomplete to autocomplete me a Python setup and to autocomplete automatically running it.
It scrapes HN, and it works. Ironically, it's why I'm here.
Why would you scrape HN when they offer free, generous API?
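For reference, a minimal sketch against the official endpoints (documented at https://github.com/HackerNews/API):

```python
import json
import urllib.request

# Fetch and print the current top N stories via HN's Firebase API.
def top_stories(n: int = 5) -> None:
    base = "https://hacker-news.firebaseio.com/v0"
    with urllib.request.urlopen(f"{base}/topstories.json") as resp:
        ids = json.load(resp)[:n]
    for item_id in ids:
        with urllib.request.urlopen(f"{base}/item/{item_id}.json") as resp:
            item = json.load(resp)
        print(item.get("score"), item.get("title"))

top_stories()
```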
Autocomplete only 'knew' how to output a scraper...
Not true, I tried just now. Took 30 seconds of due diligence. You could have done this too. Do better.
The problem is they’ll do what you ask. And if you are the type of non-curious person who replies “ Autocomplete only 'knew' how to output a scraper...”, then you’ll tell it to make you a scraper instead of ask what your options are for getting HN data.
My comment is the autocompletion to your prompt.
My job is just text autocomplete.
Insane to see this kind of comment on Hacker News. I suspect it's satire!
"Text autocomplete" is literally what you just did.
What makes you think you're anything more than a 'text autocompleter'? You are just autocompleting someone's comment.
If you seriously cannot tell what is the difference between a human being and a LLM and think they are both "autocompleters", you know very little about both humans and LLMs.
with all my ears
This thought that "maybe we are just next token predictors too" is not particularly clever. Most of us have thought about that, but a bit of experience with LLMs makes it obvious that's not what's going on here. I think it's a bit like listening to a recording of a person and swearing there's an actual person in the recording device because the audible output is indistinguishable from the real thing. Why would you do that? You wouldn't, unless you have no idea how a recording device works, in which case it seems like magic.
> a bit of experience with LLMs make it obvious that’s not what’s going on here
I feel like that overstates the point quite a bit. There's a lot that's similar: neurotransmitter release is stochastic at the vesicle level, ion channels open and close probabilistically, post-synaptic responses have noise. A given neuron receiving identical input twice doesn't produce identical output. Neither brains nor LLMs have a central decider that forms intent and then implements it. In both, decisions emerge from network dynamics; they're a description of what the system did, not a separate cause (see Libet's experiments).
Now pretty clearly there's a lot that's different, and of course we don't understand brains enough to say just how similar they are to LLMs, but that's the point: it's an interesting thought experiment and shutting it down with a virtual eyeroll is sad.
A one-way audio channel is indeed too weak for a person to distinguish a person from a recording, but a bidirectional audio channel is easily strong enough: the person can verbally ask the person-or-recording a question and see if it is acknowledged.
I claim that a modern frontier LLM can be given simple instructions that make it impossible for a person to reliably distinguish it from a person over a bidirectional text-only medium.
Thank you for your completion
>some don't really seem to make sense.
This one stood out to me:
>Machine-scale infrastructure. [...] Git itself wasn't designed for that load, and bolting AI onto platforms not built for agents is the biggest mistake of this era. [...] Git itself is being reengineered for machine scale.
Git itself is so far down the list of bottlenecks that do or could hamper LLM-driven development, even projecting years into the future...
I wholeheartedly disagree.
Git has always been one of the biggest perf bottlenecks inside of the product.
First for any scaled deploy we recommended NFS. We were young and dumb and it was too slow. (We’ve all been there)
Then we went to an RPC model with gitaly and even unwrapped some of the git calls inside of that to speed it up.
Just a few months ago we had a large customer with thousands of devs and a large monorepo whose deployments ground to a halt because of a cloning-strategy change that introduced an accidental 10x in git calls. Git itself was the bottleneck because it's not designed for this scale and speed.
For enterprises where thousands of developers are contributing code via git to a centralized system of record, and who are firing off 1000s of CI jobs, Git is absolutely a bottleneck.
Now with LLM technologies we should easily expect a 5-20x code volume increase on the conservative side. Git is being stretched to its perf limits.
(Source: see my profile)
There's a familiar saying: "Markets can remain irrational longer than you can remain solvent." I think that applies here as well. Everyone (customers) wants AI; investors demand it. It may eventually calm down, but I'm sure many companies will be left behind and ultimately fade away if they don't keep up until then.
I don't think anything is going to calm down.
Models will only get better with time, not worse.
Demand will keep rising.
Of course they're not going to get worse. That would be absurd. The rate of progress will slow down though.
> Of course they're not going to get worse. That would be absurd. The rate of progress will slow down though.
It's unlikely, but not totally implausible - model collapse would mean that subsequent models get worse over time, not better.
I don't think it would be absurd for them to worsen. If LLMs cause discourse to worsen, but also grow and change, then the trainers are in a conundrum of ignoring new training data or losing track of the zeitgeist.
They might get worse for 2 reasons:
1. AI-free training sets no longer exist. This might degrade quality, although some claim that it will not.
2. Cost. Right now they are burning a lot of money to convince people it's good. But they might not be able to keep it up forever and need to increase prices (which few will want to pay) or degrade the quality to save money.
The memo also says they're eliminating a lot of middle management tiers which has been a theme for a lot of companies recently. It's also been a theme historically. Really has nothing to do with AI. It's just the classic executive view that they are paying people who sit in meetings and write emails instead of writing code. Blissfully unaware that meetings and emails are how big organizations function.
> Blissfully unaware that meetings and emails are how big organizations function.
I don't know, I've seen more big organizations that have a dysfunctional amount of middle management and "meetings about meetings" than ones that truly benefit from that culture.
I worked for some large corps and they all had one thing in common.
Tons of middle management that makes no decisions whatsoever.
Every time you ask a question, they delegate, until you end up at person 1 again and they just can't decide anything.
It's like they all have decision paralysis.
Your argument doesn't make sense. They literally explained why they are doing it. They are looking to remove those who can't or won't keep up with AI. That can be managers but also engineers. That's what most companies right now are doing.
Right but naturally that's not actually why they're doing it. In actuality, it's a layoff - they did not go through and analyze which employees are "keeping up" and which aren't, don't be so naive.
This, like virtually all layoffs, is for economic reasons. Of course you can't say that because that reflects poorly on your growth and makes your investors uneasy and yadda yadda yadda. But what do investors like? Hm? AI!
Oh! Oh!!! This is strategic, you see, so we can use even more AI, yes yes that's right mhm.
> they did not go through and analyze which employees are "keeping up" and which aren't, don't be so naive.
They do, on the org level. That's not news for anyone who has worked at the upper mgmt level in corporations. Rule no. 1 is you keep your mouth shut about anything there. And of course it's for economic reasons... it's a business, not a charity to provide lifelong employment for employees who aren't aligned with mgmt goals. Mgmt tells stories depending on who asks. Levels below execute them (by identifying those who aren't aligned).
And people wonder why there is so much push back against AI. The last thing leadership should do when laying off people is use the term AI. It's the most tone deaf thing you can do.
We don't live in the same world as they do. Saying AI out loud makes line go up, not down. Investors are still eating this shit up, for now at least...