The quoted revenue numbers seem insane, but I guess it's the result of corporate deals where every developer seat is hundreds of dollars a month?
My job has been publicly promoting who's on top of the "AI use dashboard" while our whole product falls apart. Surely this house of cards has to collapse at some point, better get public money before it does.
I wish there was some sort of community project where engineers could whistleblow about their product falling apart through misguided AI pushes.
I see it everywhere in my private circles, I'm not sure the story is truly reaching the big public.
I've gone through many, many fads and a lot of smoke during my career, but this is the first time I'm actually worried about things falling apart.
>I wish there was some sort of community project where engineers could whistleblow about their product falling apart through misguided AI pushes.
It would be an awesome thing to see. But it would need to be hosted in another country, like The Pirate Bay.
Also, what is their incentive?
Yeah, it is wild seeing with my own eyes how bad these tools are in a lot of cases. We do have some vibe coders on our team, but they are basically banned from my current project because they completely ruin the design and nuke throughput. HN would have me believe I'm a Luddite who shouldn't be writing code, however. I truly do not understand how to reconcile this experience, and many times it is too complicated a topic to explain to someone who isn't an engineer. AI is the ultimate Dunning-Kruger machine. You cannot fix what you do not know, because you do not know that you did not know.
As you say, I think things are just going to fall apart and we're just going to have to learn the hard way.
No, these tools are really great in a lot of cases. But they still don't have general intelligence or true understanding of anything - so if people use them wrong and rely on their output because it looks good and not because they verified it, then that is on the people using them.
I mean, that is fine, but then it seems like people at large are not using them "right". I think you'll find that since these tools are convenient and produce a lot of code in terms of lines, that verifying goes out the window. Due diligence was hard before these tools existed.
Oh, I do find it certainly tempting to get lazy with these tools, but I did learn that there are side projects where vibe coding is fine, and important codebases that can be improved with LLMs - but not if you just let agents loose on them.
fatbabies from the dot com days
At least I’m not alone.
My company has a vibe coded leaderboard tracking AI usage.
Our token usage and number of lines changed will affect our performance review this year.
I have started using the most token-intensive model I can find and asking for complicated tasks (rewrite this large codebase, review the resulting code, etc.)
The agent will churn in a loop for a good 15-20 minutes and make the leaderboard number go up. The result is verbose and useless but it satisfies the metrics from leadership.
Congrats on becoming AI native
How much do you think that's costing?
> Our token usage and number of lines changed will affect our performance review this year.
I'm going nuts, because as I was "growing up" as a programmer (that was 20+ years ago) it was stuff like this [1] that made me (and people like me) proud to be called a computer programmer. Copy-pasting it in here, for future reference, and because things have turned out so bleak:
> They devised a form that each engineer was required to submit every Friday, which included a field for the number of lines of code that were written that week. (...)
> Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementer, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code. (...)
> He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000.
[1] https://www.folklore.org/Negative_2000_Lines_Of_Code.html
This is insane.
> Our token usage and number of lines changed will affect our performance review this year.
The AI-era equivalent of that old Dilbert strip about rewarding developers directly for fixing bugs ("I'm gonna write me a new mini-van this afternoon!") - just substitute intentional bug creation with setting up a simple agent loop to burn tokens on random unnecessary refactoring.
Could you both name and shame?
Name pretty much any company. Every one of my friends has said their company is doing this. Across 3 countries, mind you. Especially if they already use the Microsoft Office suite - those folks got sold Copilot on a deal, it seems.
I work for a mega corp, and our global overlord (who is an ex-dev) has tried Claude Code at home and figured out that generating large amounts of code comes with its own challenges - they explicitly don't want this to happen, so there's no such metric.
The opposite: none of my friends' companies do this. They all work at smaller companies though, which I bet is the difference.
I work at a smaller company that does this.
Weird. I would have thought most smaller companies would not need this sort of useless metric where people know each other and know what they are doing. These things are generally the domain of larger companies where they have already dehumanized their employees and deal only with numbers.
I feel like a crazy person, especially when I read HN. Half or more of the comments on this thread are saying how the game is over for even writing code. Then at my job, I see people break things at a rate I can't personally keep up with. Worse, I hear more and more colleagues talk about mandated AI tooling usage and massive regression rates. My company isn't there yet, but I feel it is around the corner.
I mean, they claim they've got 15B consumer revenue and 900M weekly active users.
If that's accurate, that means what, like 11% of the human population is using their product, and the average user pays about $17 a year?
That seems incredibly high, especially for poorer countries.
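A quick sanity check on those figures, assuming the $15B is annual consumer revenue and a world population of roughly 8.1B (both assumptions, not stated in the claim):

```python
# Back-of-envelope check of the claimed ChatGPT figures.
# Assumed: 15B = annual consumer revenue (USD), world population ~8.1B.
revenue = 15e9
weekly_active_users = 900e6
world_population = 8.1e9

share_of_humanity = weekly_active_users / world_population
revenue_per_user = revenue / weekly_active_users

print(f"{share_of_humanity:.1%}")        # share of humanity using it weekly
print(f"${revenue_per_user:.2f}/year")   # average revenue per weekly user
```

So the numbers are internally plausible: ~11% of humanity and roughly $17 per user per year, which is far below a single month of a paid subscription - consistent with most users being free-tier.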
Still, I do know that if I go to a random cafe in the developed world and peep at people's screens, I'm very likely to see a ChatGPT window open, even on wildly non-technical people's screens.