They will never admit it, but many are scared of losing their jobs.

This threat, while not yet realized, is very real from a strictly economic perspective.

AI or not, any tool that improves productivity can lead to workforce reduction.

Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.

You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?

To make matters worse, all your competitors also have the same semi-automatic ovens...

> Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.

That is actually the case with a lot of bakeries these days. But there's one major difference: the baker can rely, with almost 100% certainty, on the form, shape, and ingredients being exact to within a rounding error. Each time. No matter how many times they use the oven. And they don't have to invent strategies for how to "best use the ovens"; they don't claim to "vibe-bake" 10x more than they used to bake before, etc. The semi-automated ovens just effing work!

Now show me an LLM that even remotely provides this kind of experience.

"vibe-bake" is maybe the best thing I've heard in a long time. Thank you for that, you made my day!

Gladly - well, now don't just sit on this great neologism, push it out; let's make it at least 20% as popular as the term "microslop" :)

Eh, accuracy and reliability are a different topic, hashed out many times on HN. This thread is about productivity. I'm a staff engineer and I don't know a single person not using AI. My senior engineers are estimating 40% gains in productivity.

And every time the issue is side-stepped by chatbot proponents.

Accuracy and reliability are necessary to measure real productivity. If you have produced code that doesn't work right, you haven't "produced" anything (except in the economic sense of managing to get someone to pay for it).

For example, if you produce 5x more code at 5% reliability, the net result is a -75% change in productivity (ignoring the overhead cost of finding the unreliable parts).
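Spelled out as a minimal sketch in Python, assuming unreliable output simply counts for nothing (the function name and numbers are just for illustration):

```python
# Back-of-the-envelope for the claim above; purely illustrative.
# Assumption: code that fails reliability checks contributes zero value.

def net_productivity(volume_multiplier: float, reliability: float) -> float:
    """Effective output relative to a baseline of 1.0."""
    return volume_multiplier * reliability

effective = net_productivity(5.0, 0.05)  # 5x the code, only 5% of it correct
change = effective - 1.0                 # change relative to the old baseline

print(f"effective output: {effective:.2f}x")  # 0.25x
print(f"net change: {change:+.0%}")           # -75%
```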

Exactly this. High productivity, if all you do is generate slop and brainrot videos. If you are going to generate code with it... well, how productive was the genius at AWS who used Kiro to cause that December outage? Three years ago, that would have been a career-ending choice of productivity tools.

We’ve all been waiting for the reliability shoe to drop for, what, a year now?

It's only slop if you don't understand the code, prompt, and result, and skip code reviews. You can have large productivity gains without reducing quality standards.

> It's only slop if you don't understand the code, prompt, and result, and skip code reviews. You can have large productivity gains without reducing quality standards.

So essentially like delegating all work to a beginner programmer, only 10x more frustrating? Well, that's not what I would classify under "Pocket PhD" or "Nation of PhDs in a datacenter", which is the bullshit propaganda the AI CEOs are relentlessly pushing. We should not have to figure this out for them - they were saying this will write ALL code in 6 months from "now", the last time "now" being January 2026, so in a little over 4.5 months. No, we should not be fixing this mess; f*k understanding the prompts and doing code reviews of the AI slop. Why does it not work as advertised?

I'm not here to defend bs propaganda. I don't think I've seen anyone defend that stuff. I don't know if you're shifting goalposts or if that's what you've always been worried about.

I'm just saying the productivity gains are real, even in serious production-level and life-critical systems.

If you are only able to think in binaries, no-AI or PhD-AI, that's a you problem.

> I'm just saying the productivity gains are real, even in serious production-level and life-critical systems.

Again, neither serious studies (see the METR study on dev productivity) nor the ever-increasing rate of major incidents caused by AI supports your statement. Not to mention the absolute lack of well-known AI-produced products.

> If you are only able to think in binaries, no-AI or PhD-AI, that's a you problem.

No, you see, if I were the CEO of a public company and I lied through my teeth to investors and the general public about the capabilities of my product, then I would normally go to jail. The CEOs of major AI companies are making claims that do not seem to be confirmed by reality. They have burned several hundred billion dollars so far in pursuit of "god-level intelligence". What came out instead is "your prompting sucks" or similar nonsense.

I am only holding them to the standards they have repeatedly, boldly, and insistently set for themselves. You should be too.

> METR

Yes, I've seen it. It was certainly interesting at the time. If you refresh yourself on the study, it admits to reflecting a narrow point in time, on a narrow task type and toolset.

Last July, most people I know weren't automating Jira tickets, pull requests, comment addressing, design docs, multi-repo research, and rule-set customization. Now everyone I know does, and each of these incrementally speeds up productivity.

> Not to mention the absolute lack of well-known AI-produced products.

This is a strange comment. We have a well-known example in openclaw, which is notoriously vibe-coded and which, again, if you follow the thread, I'm not defending. Meanwhile, I know senior and staff engineers at most FAANG companies, and every single one uses AI to code, so many, many products you know are being written with AI.

I don't wanna dox myself, but last year my company developed a greenfield product with a pretty large headcount of eng (multiple teams) that was built with an AI-first development workflow. Now, that doesn't mean the 20 engineers just stood around twiddling their thumbs. They were doing real engineering and software development work with heavy agentic AI use. They shipped it in six months and it's been in prod for months. If you can't see how AI is being used, I don't know what to tell you.

[deleted]

> This is a strange comment. We have a well-known example in openclaw, which is notoriously vibe-coded and which, again, if you follow the thread, I'm not defending. Meanwhile, I know senior and staff engineers at most FAANG companies, and every single one uses AI to code, so many, many products you know are being written with AI.

Oh, it's a product? What does it do? Leak data and delete inboxes? I would not call that a "product", at least not in the commercial sense.

> I don't wanna dox myself, but last year my company developed a greenfield product with a pretty large headcount of eng (multiple teams) that was built with an AI-first development workflow

Yeah, you sure are not "doxxing" yourself with this generic statement. I am sure you guys built something with the "AI-first" workflow. The point being: based on what the AI CEOs and AI boosters are saying, this should have been a project with one person organising a "fleet of agents". Why wasn't it? If it still requires a large engineering headcount, what's the point of using the AI?

A bit simplistic. The bakery can just expand its product range or do various other things to add work. In fact, that's exactly what I would expect to happen at a tech company, ceteris paribus.

This is what I find interesting - the response from most companies is "we will need fewer engineers because of AI", not "we can build more things because of AI".

What is driving companies to want to get rid of people, rather than do more? Is it just short-term investor-driven thinking?

I think it's an excuse to do needed layoffs without saying as much. So yes, preserving signals, essentially. I've never met a tech company that didn't love expanding work to fill capacity, even if the work is of little value.

How much more productive are we supposed to be in engineering? Are we 10x'ing our testing capability at the same time? QA is already a massive bottleneck at my $DAYJOB. I'm not sure what benefit the company at large derives from having the typing machine type faster.

Perhaps this is one of the understanding gaps that crop up around AI development? At my current company and most others I've worked at, testing capability is part of the same bucket because engineers do their own QA.

I'm far more interested in understanding how we can 10x our confidence in a change and not just our line counts.

The optimization function of capitalism and its instrumental convergence. The AI Alignment problem is already here, and it is us.

A market has to exist for this expanded range and for the expanded ranges of every other bakery. Otherwise the bakery's just wasting flour.

Where is this expanded demand coming from?

Two loaves of bread off the same line are perfect substitutes for each other, and compete to be sold.

Lines of code within the same code base aren't competing to be sold. They either complement each other by adding new features, making the actual product sold more valuable, or one replaces another to make a feature more desirable - look better, work faster, etc.

The market grows if you add new features - your bread now doubles as a flotation device - or you introduce a new line of bread with nuts and berries.

So, the business has to decide - does it fire some workers and pocket the difference until someone else undercuts them, or does it keep the workers and grow the market it can sell to faster?

Read the comment I replied to to see where the bread came from.

But on your point (which seems to hinge on wishful thinking): this infinity of new features you propose for every product still needs those new markets you take for granted to justify their inclusion in the product. However, cornering a new market isn't as straightforward as deploying a new feature - we all wish it was. The tech that makes it trivial for one firm to develop these features makes it trivial for everyone else to build them. This means any new market will be immediately saturated.

Even if the leap of finding new markets were as easy as you think, you still need to explain why this hypothetical company would keep paying millions in avoidable salaries. Because whatever new work you assign to the AI, it remains just as available to do the work the human labor was doing.

Adding new features doesn't necessarily grow the market. Your bread with nuts and berries competes with the regular bread for the customer's money. Other things also compete for the same money, such as medical, daycare, schooling, etc. So increasing features won't necessarily grow the market, because the market is bounded by what customers can spend. Even in an optimistic scenario, those features only have a probability of increasing revenue; it's not certain.

OTOH, if you fire those workers, it is a certainty that your bakery gets more cash. You can then use that cash to reward your shareholders (a category that conveniently includes you) via buybacks or dividends.
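To make the certainty-versus-probability point concrete, here's a toy expected-value sketch in Python; every name and number is invented for illustration, not taken from the thread:

```python
# Toy expected-value framing of the fire-vs-grow choice above.
# All numbers are hypothetical.

payroll_savings = 1.0   # certain payoff (arbitrary units) from layoffs
feature_payoff = 3.0    # payoff if the new product line finds its market
p_market = 0.25         # assumed probability that the market materializes

ev_fire = payroll_savings            # a certainty: 1.00
ev_grow = p_market * feature_payoff  # an expectation: 0.75

print(f"fire and pocket the savings: {ev_fire:.2f} (certain)")
print(f"keep staff and chase growth: {ev_grow:.2f} (expected)")
```

With these made-up numbers the certain payoff wins, which is the point above: a sure cost cut can beat a merely probable revenue gain.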

On another note, if you had 100 engineers and you lay almost all of them off, keeping 5 super-AI-accelerated engineers, while your competitor keeps 50 such engineers, your competitor is still able to iterate 10x as fast. So you can still lay people off, but at the risk of falling behind.

Writing software isn't like a small bakery with fixed demand. There are always more features to build and improvements to make than capacity allows. For better or worse, software products are never finished.

I'm starting to think that for software, the answer is to produce 2,000 loaves per month. I'm realizing now that software was supply-constrained, and organizations had to be very strategic about which apps/UIs to build. Now anything and everything can be an app, so we can build more targeted frontends for all kinds of business units that would've been overlooked before.

Maybe the bakery expands to make more than just loaves of bread: different cakes, sandwiches. Maybe it expands delivery to nearby towns.

I don't think it's valid to reduce the act of creating software to an assembly line, especially given Amdahl's law: speeding up one part of the process only helps as much as that part dominates the whole.

[dead]