I'm burning an insane number of tokens 8-12 hours a day for the dramatic improvement of some internal tooling at a big tech company. Using it heavily for an unannounced future project as well.

I presume I'm not the only one.

We suddenly have a proliferation of new internal tools and resources, nearly all of which are barely functional and largely useless, with no discernible impact on the overall business trajectory, but which sure do seem to help come promo time.

Barely an hour goes by without a new 4-page document about something that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.

Without good management AI is just a new way to make terrible work in unprecedented quantities.

With good management you will get great work faster.

The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work, but the human side owns process; if the process is no good, everything falls apart. Functional companies become hyper-functional while dysfunctional companies collapse.

Bad ideas used to be warded off by workers who, through some form of malicious compliance, would just slow down and redirect the work while advocating for better solutions.

That can’t happen as much anymore as your manager or CEO can vibe code stuff and throw it down the pipeline for the workers to fix.

If you have bad processes your company will die, or shrivel or stagnate at best. Companies with good process will beat you.

[flagged]

We had a coworker vibecode an internal tool and do a bunch of marketing to the company about how incredible it was. Then they got hired somewhere else.

I just went and deleted it because it's completely broken at every edge case and half of the happy paths too.

My team has also adopted this - it's much easier to add another layer than to refine or simplify what exists. We have AI skills to help us debug microservices that call microservices that have circular dependencies.

This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".

That's so interesting because where I work, the push was to "add one more API" to existing services, turning them into near monoliths for the sake of deployment and access. Still a mess of util and helper functions recursively calling each other, but at least it's one binary in one container.

Unfortunately I saw this pre-AI with microservices, where, while empowering developers with their beloved microservices, we created intense complexity and deployment headaches. AI will fix the slop with an obscuring layer of complexity on top.

Are you concerned this will just lead to coupling everywhere like microservices tend to do?

My main use of vibecoding is creating dozens of internal tools that have sped up tasks, or made tasks possible that were previously not. These tools would have taken weeks of time to build manually and would have been hard to justify, rather than just struggling with manual processes every now and again. AI has been life-changing in creating these kinda janky tools with janky UI that do everything they're supposed to perfectly, but are ugly as hell.

Are you able to describe any of those internal tools in more detail? How important are they on average? (For example, at a prior job I spent a bit of time creating a slackbot command "/wtf acronym" which would query our company's giant glossary of acronyms and return the definition. It wasn't very popular (read: not very useful/important) but it saved myself some time at least looking things up (saving more time than it took to create I'm sure). I'd expect modern LLMs to be able to recreate it within a few minutes as a one-shot task.)
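For what it's worth, the core of a bot like that is tiny. A hedged sketch of just the lookup half in Python (the glossary contents and function name are made up, and the Slack slash-command wiring is omitted):

```python
# Hypothetical glossary; a real bot would query the company's actual glossary store.
GLOSSARY = {
    "SLA": "Service Level Agreement",
    "RCA": "Root Cause Analysis",
}

def wtf(acronym: str) -> str:
    """Look up an acronym case-insensitively and format a reply."""
    definition = GLOSSARY.get(acronym.strip().upper())
    if definition is None:
        return f"No entry for '{acronym}' -- maybe add it to the glossary?"
    return f"{acronym.strip().upper()}: {definition}"
```

The Slack-API plumbing around this is the part that used to take the time; the lookup itself is the sort of one-shot task described above.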

It's almost always a CRUD app or dashboard that no one uses while being extremely overkill for their use case.

edit: LOL called it, a bunch of useless garbage that no one really cares about but used to justify corporate jobs programs.

Ah but it looks cool and I can put it on my stack ranking perf eval

If it's useless that's a you problem. I've been building CRUDs that would have taken me a month to get perfectly right in the span of 4-5 days which save an enormous number of human tech support hours.

Sorry man but the software world is littered with CRUD apps, they are called CRUD apps for a reason. They're basically the mass assembled stamped L-bracket of the software world. CRUD apps have also had template generators for like 30 years now too.

Still useless in the sense that if you died tomorrow and your app was forgotten in a week, the world would still carry on. As it should. Utterly useless in pushing humanity forward but completely competent at creating busy work that does not matter (much like 99% of CRUD apps and dashboards).

But sure yeah, the dashboard for your SMB is amazing.

The software industry's value proposition for the vast majority of businesses running the world lies in CRUD apps that properly capture business requirements. That's infinitely more relevant in insurance, pharma, banking and logistics than any technological breakthrough of the past 25 years.

Your rant just shows you don't understand why people pay for software.

I have one that serves a few functions: tracks certificates and licenses (you can export certs in any of the majorly requested formats), a dashboard that tells you when licenses and certs are close to expiring, a user count, a notification system for alerts (otherwise it's a mostly buried Teams channel most people miss), a downtime tracker that doesn't require people to input easily calculable fields, and a way for teams to reset their service account password and manage permissions, as well as add, remove, or switch which project is sponsoring which person, edit points of contact, verify project statuses, and a lot more. It even has some quick charts that pull from our Jira helpdesk queue: charts that people used to run once a week for a meeting are just live now in one place. It also has application statuses and links.

I'd been fighting to make this for two years and kept getting told no. I got Claude to make a PoC in a day, then got management support to continue for a couple of weeks. It's super beneficial, and targets so many of our pain points that really bog us down.
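The expiry-warning piece of a dashboard like this boils down to a simple date-window query. A minimal sketch, assuming certs are just (name, expiry date) pairs rather than whatever the real data model is:

```python
from datetime import date, timedelta

def expiring_soon(certs, today, window_days=30):
    """Return (name, expiry) pairs that expire within the window, soonest first.

    `certs` is assumed to be an iterable of (name, expiry_date) pairs;
    already-expired items are excluded here, though a real dashboard
    would likely surface those too.
    """
    cutoff = today + timedelta(days=window_days)
    hits = [(name, exp) for name, exp in certs if today <= exp <= cutoff]
    return sorted(hits, key=lambda pair: pair[1])
```

The value of the real tool is presumably less in this logic and more in keeping the data centralized and visible, per the spreadsheet discussion below.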

>> a dashboard that tells you when licenses and certs are close to expiring

Or, Excel > Data > Sort > by the Date column. No dashboard needed, no app needed.

A lot of businesses can get by just fine with making it one person's responsibility to maintain a spreadsheet for this. It can be fragile though as the company grows and/or the number of items increases, and you have to make sure it's all still centralized and teams aren't randomly purchasing licenses or subscriptions without telling anyone, it needs to be properly handed off if the person leaves/dies/takes a vacation, backed up if not using a cloud spreadsheet... I've probably seen at least a dozen startups come and go over the years purporting to solve this kind of problem, other businesses integrate it into an existing Salesforce/other deployment... it seems like a fine choice for an internal tool, so long as the tool is running on infrastructure that is no less stable than a spreadsheet on someone's machine.

In the startup world something like "every emailed spreadsheet is a business" used to be a motivating phrase, it must be more rough out there when LLMs can business-ify so many spreadsheet processes (whether it's necessary for the business yet or not). And of course with this sort of tool in particular, more eyes seeing "we're paying $x/mo for this service?" naturally leads to "can't we just use our $y/mo LLM to make our own version?". Not sure I'd want to be in small-time b2b right now.

Why are you ignoring the fact that grabbing data from heterogeneous sources, combining it and presenting it is generally never a trivial task? This is exactly what LLMs are good for.

If you are using an LLM to actually fetch that data, combine it, and present it to you in an ad hoc way (like you run the same prompt every month or something), I wouldn't trust that at all. It still hallucinates, invents things and takes shortcuts too often.

If you are using an LLM to create an application to grab data from heterogeneous sources, combine it and present it, that is much better, but could also basically be the excel spreadsheet they are describing.

The ones I can mention.. one that watches a specific web site until an offer that is listed expires and then clicks renew (happens about once a day, but there is no automated way in the system to do it and having the app do it saves it being unlisted for hours and saves someone logging in to do it). Several that download specific combinations of documents from several different portals, where the user would just suck it up previously and right-click on each one to save it (this has a bunch of heuristics because it really required a human before to determine which links to click and in what order, but Claude was able to determine a solid algo for it). Another one that opens PDFs and pulls the titles and dates from the first page of the documents, which again was just done manually before, but now sends the docs via Gemma4 free API on Google to extract the data (the docs are a mess of thousands of different layouts).
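For the PDF title/date extraction, the mess of thousands of layouts is exactly where the LLM call earns its keep; for the simplest layouts, a deterministic fallback is also possible. An illustrative sketch, assuming ISO-style dates appear in the first-page text (this is not the commenter's actual code):

```python
import re

# Pattern for ISO-style dates (YYYY-MM-DD) only; real documents need far more
# formats, which is why the commenter routes them through an LLM instead.
DATE_RE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def first_date(page_text):
    """Return the first ISO-style date found in the text, or None."""
    match = DATE_RE.search(page_text)
    return match.group(0) if match else None
```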

None of these projects sound like weeks worth of scope w/o AI.

We’re seeing the exact same where I work. Our main Slack channels have become inundated with “new tool announcements!”, multiple per day, often solving duplicate problems or problems that don’t exist. We’ve had to stop using those channels for any real conversation because most people are muting them due to the slop noise.

And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?

A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.

I'm sorry to hear that you have people abusing their new superpowers.

I run a team and am spending my time/tokens on serious pain points.

Such as?

I'll throw this out as something where it has saved literally weeks of work: debugging pathological behaviour in third-party code. Prompt example: "Today, when I did U, V, and W, I ended up with X happening. I fixed it by doing Y. The second time I tried, Z happened instead (which was the expected behaviour). Can you work out a plausible explanation for why X happened the first time and why Y fixed it? Please keep track of the specific lines of code where the behaviour difference shows up."

This is in a real-time stateful system, not a system where I'd necessarily expect the exact same thing to happen every time. I just wanted to understand why it behaved differently because there wasn't any obvious reason, to me, why it would.

The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out. The narrative touched a lot of code, and the source references it provided did an excellent job of walking me through the narrative. I independently validated the explanation using some telemetry data that the LLM didn't have access to. It was correct. This would have taken me a very long time to work out by hand.

Edit: I have done this multiple times and have been blown away each time.

> Prompt example: "Today, when I did U, V, and W, I ended up with X happening. I fixed it by doing Y. The second time I tried, Z happened instead (which was the expected behaviour). Can you work out a plausible explanation for why X happened the first time and why Y fixed it? Please keep track of the specific lines of code where the behaviour difference shows up."

> The explanation it came back with was pretty wild. It essentially boiled down to a module not being adequately initialized before it was used the first time and then it maintained its state from then on out.

Even without knowing any of the variable values, that explanation doesn't sound wild at all to me. It sounds in fact entirely plausible, and very much like what I'd expect the right answer to sound like.

This seems to be a common denominator for what LLMs actually do well: finding bugs and explaining code. Whether they can reliably produce code remains to be seen.

I answered this in a different comment below, but a lot of the friction is around the amount of time it takes to test/review/submit etc, and a lot of this is centered around tooling that no one has had the time to improve, perf problems in clunky processes that have been around longer than any one individual, and other things of this nature. Addressing these issues is now approachable and doable in one's "spare time".

The point of that friction is to keep the human in the loop wrt code quality, it's not meant to be meaningless busywork. It's difficult to believe that you sustain the benefit of those systems. Anthropic and Microsoft publicly failed to keep up code quality. They would probably be in a better spot currently if they used neither, no friction, no AI. But that friction exists for a reason and AI doesn't have the "context length" to benefit from it.

This is the difference between intentional and incidental friction: if your CI/CD pipeline is bad, it should be improved, not sidestepped. The first step in large projects is paving over the lower layer so that all that incidental friction, the kind AI can help with, is removed. If you are constantly going outside that paved area, sure, AI will help, but not with the success of the project, which is more contingent on the fact that you've failed to lay the groundwork correctly.

For me/my team, I use it to fix DevProd pain points that I would otherwise never get the investment to go solve. Just swapped Webpack for Rspack, for example. I could easily do it myself, which is why I can prompt it correctly and review the output properly, but I can let it run while I'm in meetings over more important product or architectural decisions.

Creating stakeholder value

Promoting synergy

Creating productivity gain narratives

Aligning stakeholders

Performing the blood ritual

Eating a bagel

>Such as?

it's crazy that the experiences are still so wildly varying that we get people that use this strategy as a 'valid' gotcha.

AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.

I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs, or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. a decent LLM does it all with fairly easy 5-10 word prompts.

ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".

this isn't some untested frontier land anymore. People that embrace it find it really empowering except on the edges, and even those state-of-the-art edge people are using it to do the crap work.

This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.

People ask for examples because they want to know what other people are doing. Everything you mention here is VERY reasonable. It's exactly the kind of stuff no one is going to be surprised that you are getting good results with the current AI. But none of that is particularly groundbreaking.

I'm not trying to marginalize your or anyone else's usage of AI. The reason people are saying "such as" is to gauge where the value lies. The US GDP is around $30T. Right now there's something like ~$12T reasonably involved in the current AI economy: massive company valuations, data center and infrastructure build-out. A lot of it is underpinning and heavily influencing traditional sectors of the economy that run a real risk of going down the wrong path.

So the question isn't what AI can do; it can do a lot, and even very cheap models can handle most of what you have listed. The real question is what the cutting-edge, state-of-the-art models can do so much better that the productive value added justifies such a massive economic presence.

That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x.

It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.

It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.

I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.

Why not try it yourself? Inference providers like BaseTen and AWS Bedrock have perfectly capable open source models as well as some licensed closed source models they host.

You can use "API-style" pricing on these providers which is more transparent to costs. It's very likely to end up more than 200 a month, but the question is, are you going to see more than that in value?

For me, the answer is yes.

What makes you think I haven't tried it myself?

The "costs" are subsidized, it's a loss-leader.

It's an important concern for those footing the bill, but I expect companies really in the face of being impacted by it to be able to do a cost-benefit calculation and use a mix of models.

For the sorts of things GP described (iptables whatever, recalling how to scan open ports on the network, the sorts of things you usually could answer for yourself with 10-600 seconds in a manpage / help text / google search / stack overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware to suffice. Whereas now companies might just offload such queries to the frontier $200/mo plan because why not, tokens are plentiful and it's already being paid for; if in the future it goes to $2000/mo with more limited tokens, you might save them for the actual important or latency-sensitive work and use lower-cost local models for simpler stuff. That lower cost might involve a $2000 GPU to be really usable, but it pays for itself shortly by comparison.

To use your Uber analogy, people might have used it to get to downtown and the airport, but now it's way more expensive, so they'll take a bus or walk or drive downtown instead. The airport trip, even though it's more expensive than it used to be, is still attractive in the face of competing alternatives like taxis/long-term parking.
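The mix-of-models idea sketches out to something very small. Hypothetical Python, with made-up model names and a deliberately crude difficulty signal:

```python
# Made-up model names, purely to illustrate cost-based routing.
CHEAP_MODEL = "local-open-weight"   # runs on a workstation GPU
EXPENSIVE_MODEL = "frontier-plan"   # the hypothetical $2000/mo tier

def route(difficulty: str, latency_sensitive: bool) -> str:
    """Send hard or latency-sensitive work to the frontier model, the rest local."""
    if difficulty == "hard" or latency_sensitive:
        return EXPENSIVE_MODEL
    return CHEAP_MODEL
```

In practice the difficulty signal is the hard part; companies might start with a static allowlist of query types before trying anything smarter.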

None of that is concrete though; it's all alleged speed-ups with no discernible (though a lot of claimed) impact.

> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.

People will stop asking for the proof when the dust-eating commences.

> but sure do seem to help come promo time.

I personally noticed this. The speed at which development was happening at one gig I had was impossible to keep up with without agentic development, and serious review wasn't really possible because there wasn't really even time to learn the codebase. Had a huge stack of rules and MCPs to leverage that kinda kept things on the rails and apps were coming out but like, for why? It was like we were all just abandoning the idea of good code and caring about the user and just trying to close tickets and keep management/the client happy, I'm not sure if anyone anywhere on the line was measuring real world outcomes. Apparently the client was thrilled.

It felt like... You know that story where two economists pass each other fifty bucks back and forth and in doing so skyrocket the local GDP? Felt like that.

That's not on Claude, that's on the authors.

Claude is a tool. It can be abused, or used in a sloppy way. But it can also be used rigorously.

I've been pushing my team to be more papercut-free in the tooling they develop and it's been rough, mostly because of the velocity.

But overall it's a huge net positive.

>Barely an hour goes by without a new 4-page document about something that everyone is apparently meant to read, digest and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.

well, isn't that what AI can be used effectively for: to generate [auto]responses to the AI-generated content.

What a delightful world we're building.

Sounds like a workplace wide DDoS.

I'm convinced none of these people have any training in corporate finance. For if they did, they'd realise they were wasting money.

I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask... So I see an increase in operating expenses. Let me go calculate the ROIC. Hmm, it's lower, what to do? Oh I know, let's fire the people who caused this (it won't be the C-Suite or management who takes the fall) lmao.

Do you really think companies have started spending millions on tokens and no one from finance has been involved?

You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.

There is a difference between just noticing negative financial outcomes and correctly attributing them. Right now most companies are still adjusting to declining inflation. Their bottom lines are doing quite well because consumer price inflation is much stickier than supply inflation. We are coming off of one of the quickest and largest supply-led inflationary cycles. It may not be immediately apparent to many companies that new expenditures are a drag on profitability.

The real thing to look at is whether or not the future outlook for company AI spend is heading up or down?

What a finance team allocates on spend has nothing to do with what the tokens actually get used for.

Are they peeking over the shoulder of each team and individual? Of course not.

It can be the case that the spend is absolutely wasteful. Numbers don’t lie.

> Do you really think companies have started spending millions on tokens and no one from finance has been involved?

Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.

That magic trick only works for publicly traded stocks.

Most firms are not a Google or a Microsoft; a firm's cash balance can become a strategic weapon in the right environment. So wasting money is not a great idea. Lest we forget dividends.

Moreover, if you have a budget set for spend on tokens, you have rationing. Therefore the firm should be trying to get the most out of token spend. If you are wasting tokens on stuff that doesn't create a financial benefit for the firm, then indeed it is not in line with proper corporate financial theory.

No, it works for any VC-backed company. Something like 60% of VC funding last year went to AI companies. VCs aren't going to give you money unless you're building an agentic AI-native agent platform for agents.

No. Employees of publicly traded firms benefit from short-term gains in the stock price, assuming the stock price jump holds throughout the period of grant/vesting.

People who work at VC-backed firms do not get to enjoy the same degree of liquidity, not even close. There can be some outliers but that is 0.1% of all.

Can't believe simple stuff like this has to be said.

CFOs or VPs absolutely benefit by hyping their company up to private investors by allowing tokenmaxxing to go on unchecked. Tender offers, acquisitions, and acquihires all exist. Or just good old fashioned resume padding by saying you "enabled AI transformation" or whatever helps you land a big payday at some other company.

Sounds like they did train in corporate finance.

Sounds like you haven’t had training in corporate finance.

More that there is a poor incentive structure. Just like how PE can make money by leveraged buyouts and running businesses into the ground. Many of the financial instruments that make both that and the current AI bubble possible were legal then made illegal within the lifetimes of the last 16 presidents.

Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan, you used to have to go to something called a bank; now it comes from who knows where: drug cartels, child traffickers, Blackstone, Russian & Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?

My issue was not with criticism of the money being spent or how it’s being obtained. I was specifically commenting on this statement:

> “I'm convinced none of these people have any training in corporate finance. For if they did, they'd realise they were wasting money.”

This isn’t meaningful criticism. This is a vacuous “those guys are so dumb”.

I'm sorry to hear you have such poor leadership.

AI is truly perfect for internal tooling. Security is less or no concern, bugs are more acceptable, performance / scalability rarely a concern. Quickest way to get things done, and speed up production development, MVP development etc.

> Security is less or no concern

[waits for chickens to come home to roost]

If security was the prime concern, there would be no chickens and no coop and no farm - people would still be living in caves. After all, outside is dangerous, and Grug Chief said, smart ass grugs with their smart ass ideas like fire or agriculture just invite complexity and create security vulnerabilities.

After all (Grug Chief reminds us), the only truly secure computing system is an inert rock.

> [waits for chickens to come home to roost]

"We are writing down X billions over 4 years, and have canceled several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming home to roost. If everyone is guilty, is anyone really guilty?"

Doesn't take long until someone has the bright idea to pipe customer tickets directly into the poorly written internal tool

No problems at all, except unauthorized access to a model they were claiming was a weapon that couldn't be released to the public, and having their CLI code leaked, all in the last two weeks. Everything's just fine

When attackers can move laterally through everything because every internal tool leaks credentials and data there will be issues.

Internal tool doesn't have credentials. Checkmate ;)

Anthropic seems to be doing fine :)

This comment makes me want to scream.

This is what happens when entire industries go all in on "Move fast and break things." Imagine what they said about software applying to everything else in the world. That's what's coming.

> Security is less or no concern, bugs are more acceptable, performance / scalability rarely a concern. Quickest way to get things done

> This is what happens when entire industries go all in on "Move fast and break things." Imagine what they said about software applying to everything else in the world. That's what's coming.

This is literally how the rest of the world works already, and always has. We'd still be living in caves otherwise. Fortunately most people (at least outside software) seem to understand that security is a trade-off against usefulness, and not an end goal in itself.

This is not going away.

Even right now the difference with working with 'AI native' developers or with regular developers is day and night.

I certainly wouldn't want a non-clause enabled developer on my team now.

> I certainly wouldn't want a non-clause enabled developer on my team now.

You only want to work with people who are hip with the North Pole?

Typo obviously :-)

[deleted]

I am, oddly, able to get really quite a lot of mileage out of $20/mo of OpenAI plan, and I have never encountered a usage limit. I have gotten warnings that I was close a couple times.

I wonder what I’m doing differently.

I did spend quite a bit of time, mostly manually, improving development processes such that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without tooling improvements?

I wonder if you're like me? I tried out the MCPs and sub agents and rules and bells and whistles and always just came back to a plain Codex / Claude Code / Cursor Agent terminal window, where I say what I want, @ a few files, let it rip, check the diff, ask for some adjustments, then commit and start the process over after clearing context.

Haven't found a process that beats this yet and I burn very few tokens this way.

I don’t really write code with it at all, and that’s why I burn so many tokens.

I like writing code, I’m good at writing code. What I hate doing is dredging through logs, filtering out test scenarios and putting together disparate information from knowledge silos - so I have the AI doing that. It’s my research assistant.

Effectively I’m using it like an automated search engine that indexes anything I want and refines the results by using the statistical near neighbors of how other people explained their searches.

I'd be interested to learn what kind of internal tooling you are improving?

We've had a lot of complaints about our review processes, time to submit, etc, and a lot of that boils down to tools no one has time to improve.

It's now trivial to fix these problems while still doing our day jobs -- shipping a product.

Personally, a static analysis PR check to catch some types of preventable runtime production errors in application code
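As a flavor of what such a check can look like: a toy Python example that flags bare `except:` clauses, one classic source of silently-swallowed runtime errors. This illustrates the category of check, not the commenter's actual implementation:

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Return line numbers of bare `except:` handlers, which swallow every error."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]
```

A PR check would run something like this over the changed files and fail the build on any hits.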

I’m not them, but we have vastly improved our internal pipeline monitoring/triage/root cause/etc. with a new system whose whole purpose is basically to hook into all of our other systems and consolidate them under a single view, with an emphasis on shortening the amount of time it takes to triage and refine issues.

This would previously have been too ambitious to ever scope, but we’ve been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts as more of a window/pass-through control plane, the fact that it’s vibe coded poses little risk, since we still have all the existing infrastructure under it if something goes awry.

Same, and it is working really well (I say this contrary to most individual reports).

I have a coworker who says something similar. He vibe coded tons of cryptic code, which indeed solves some problems, though it could be way more compact and well structured. Now it is hitting a complexity limitation, since the LLM can't comprehend it anymore, and a human can't comprehend it by a large margin.

I went through one the other day which was a nest of Go code which boiled down to a 10 line shell script.

Just wait a month, Opus 4.8 will comprehend it for sure.

it will comprehend it well enough to complicate it further into a rat's nest that only Opus 4.9 can comprehend, and so on. Good luck if you run into a bug before the N+1 version launches.

honest recommendation: nuke and pave after analyzing (w/ AI of course) where it went horribly wrong.

it's trivial to reimplement a better solution.

It's a bit of workplace politics: I would need to call that guy out and tell him that he is not a hyper-performer, but just pushed lots of low-quality code that will produce lots of negative impact in the long term.

Also, I am not sure it is trivial to reimplement. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.

It sounds like you might have some larger process problems if someone can just inject a bunch of vibe-coded slop into critical workflows while more discerning eyes are dubious of the quality/reliability etc.

In some sense, sure. There’s a lot of processes that weren’t previously needed, because sloppy people who couldn’t or wouldn’t think things through were mostly incapable of producing PRs that passed all the existing tests.

It's partially/largely a management problem. One of the tier-1 productivity metrics in the group is # of LoC created by engineers, which creates a dynamic of people exchanging favors to push AI slop into the codebase, or be labeled as low performers.

The problem was definitely because they didn't use enough AI fast enough. They should just try again

I guess that's one way to tout a technology as revolutionary without actually needing to provide any proof of it. Just say you're using it for "internal tooling" and "unannounced projects", that way nobody can look at them and notice they're indistinguishable from the slop that clogs up Show HN nowadays.

It's better than the "here's my code, it's a giant pile of spaghetti but only luddites care about code quality and maintainability anyway" method, at least.

I'm using it to write frontend code literally 5 times faster. What would have been a shell script is now a GUI backed by an API layer that doesn't require looking up internal documentation to know that it exists.

I've been using it to write tools that drastically facilitate spinning up a local k8s cluster with an entire suite of development services that used to take two days to set up in Docker.
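A wrapper like that is often little more than command assembly. An illustrative Python sketch, assuming a kind-based local cluster and a per-service manifest layout (the names and paths are made up); returning the commands rather than running them keeps the plan inspectable:

```python
def cluster_commands(name, services):
    """Assemble the commands to create a kind cluster and apply service manifests.

    Returned as argv lists (suitable for subprocess.run) instead of being
    executed here; the manifests/ directory layout is an assumption.
    """
    cmds = [["kind", "create", "cluster", "--name", name]]
    for svc in services:
        cmds.append(["kubectl", "apply", "-f", f"manifests/{svc}.yaml"])
    return cmds
```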