On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
> On the plus side, vibe coding disaster remediation looks to be a promising revenue stream in the near future, and I am rubbing my hands together eagerly as I ponder the filthy lucre.
I don't think it will be; a vibe coder using Gas Town will easily spit out 300k LoC for an MVP TODO application. Can you imagine what it will spit out for anything non-trivial?
How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There's a middle ground here that you're not considering (at least in this small amount of text). Vibe coders will spit out a lot of nonsense because they don't have the skills (or choose not to) to tweak the output of their agents. A well-seasoned developer using tools like Claude Code on such a codebase can, at this point, remediate a lot more quickly than someone not using any AI. The current best practices are akin to a mathematician's approach to calculator use, rather than a student's trying to just pass a class. Working in small chunks and understanding the output at every step is the best approach in some situations.
That's very true. The LLM can be an accelerator for the remediator, too, with the value-add coming from "actually knowing what they're doing", much as before.
The f is gas town?
> How do you even begin to approach remedying that? The only recourse for humans is to offer to rebuild it all using the existing features as a functional spec.
There are cases where that will be the appropriate decision. That may not be every case, but it'll be enough cases that there's money to be made.
There will be other cases where just untangling the clusterfuck and coming up with any sense of direction at all, to be implemented however, will be the key deliverable.
I have had several projects that look like this already in the VoIP world, and it's been very gainful. However, my industry probably does not compare fairly to the common denominator of CRUD apps in common tech stacks; some of it is specialised enough that the LLMs drop to GPT-2 type levels of utility (and hallucination! -- that's been particularly lucrative).
Anyway, the problem to be solved in vibe coding remediation often has little to do with the code itself, which we can all agree can be generated in essentially infinite amounts at a pace that is, for all intents and purposes, almost instantaneous. If you are in need of vibe coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
The general business problem to be solved is how to make this consumable to the business as a whole, which still moves at the speed of human. I am fond of a metaphor I heard somewhere: you can't just plug a firehose into your house's plumbing and expect a fire hydrant's worth of water pressure out of your kitchen faucet.
In the same way, removing the barriers to writing 300,000 lines isn't the same as removing the barriers to operationalising, adopting and owning 300,000 lines in a way that can be a realistic input into a real-world product or service. I'm not talking about the really airy-fairy appeals to maintainability or reliability one sometimes hears (although, those are very real concerns), but rather, how to get one's arms around the 300,000 lines from a product direction perspective, except by prompting one's way into even more slop.
I think that's where the challenges will be, and if you understand that challenge, especially in industry- and domain-specific ways (always critical for moats), I think there's a brisk livelihood to be made here in the foreseeable future. I make a living from adding deep specialist knowledge to projects executed by people who have no idea what they're doing, and LLMs haven't materially altered that reality in any way. Giving people who have no idea what they're doing a way to express that cluelessness in tremendous amounts of code, quickly, doesn't really solve the problem, although it certainly alters the texture of the problem.
Lastly, it's probably not a great time to be a very middling pure CRUD web app developer. However, has it ever been, outside of SV and certain very select, fortunate corners of the economy? The lack of moat around it was a problem long before LLMs. I, for example, can't imagine making a comfortable living in it outside of SV engineer inflation; it just doesn't pay remotely enough in most other places. Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating. Underappreciated specialist personalities will certainly see a return in a flight-to-quality environment.
>it's probably not a great time to be a very middling pure CRUD web app developer
Businesses don't pay for CRUD apps; businesses pay for apps that solve problems, which often involves CRUD to persist their valuable data. This often lives within the sometimes very strange and difficult-to-understand business logic that varies greatly from one business to another. That is what "CRUD app developers" actually do, so dismissing them as though there is zero business logic and only CRUD does them, and us, a disservice.
> it's probably not a great time to be a very middling pure CRUD web app developer. However, has it ever been, outside of SV and certain very select, fortunate corners of the economy?
Like 80% of programming jobs outside the USA are building either local or outsourced CRUD web applications. Many people live quite well thanks to exchange rates. I wonder what's gonna happen if/when those jobs disappear.
I've read your whole reply and agree with most of it; what I don't agree with (or don't understand) is below:
> If you are in need vibe coding disaster remediation consulting, it's not because you need to refactor 300,000 lines of slop real quick. That's not going to happen.
My experience as a consultant to business is that they only ever bring in consultants when they need a fix and are in a hurry. No client of mine ever phoned me up to say "Hey, there, have you any timeslots next week to advise on the best way to do $FOO?", it's always "Hey there, we need to get out an urgent fix to this crashing/broken system/process - can we chat during your next free slot?".
> Like everything else worth doing, deep specialisation is valuable and, to some extent, insulating.
I dunno about this - depends on the specialisation.
They want a deep specialist in K8? Sure, they'll hire a consultant. Someone very specialist in React? They'll hire a consultant. C++ experts? Consultants again.
Someone with deep knowledge of the insurance industry? Nope - they'll look for a f/timer. Someone with deep knowledge of payment processing? No consultant, they'll get a f/timer.
> My experience as a consultant to business is that they only ever bring in consultants when they need a fix and are in a hurry.
No, that's fair, and I think you're right about that. But refactoring 300,000 lines 'real quick' isn't going to happen, regardless of that. :)
> They want a deep specialist in K8? Sure, they'll hire a consultant. Someone very specialist in React? They'll hire a consultant. C++ experts? Consultants again.
I implicitly had narrow technical specialisations in mind, albeit including ones that intersect with things like "insurance industry workflows".
Do you not fear that future/advanced AI will be able to look at a vibe-coded codebase and make sensible refactors itself?
That's my worry. Might be put off a few years, but still...
But it's already the present.
For what I am vibing, my normal work process is: build a feature until it works, get decent test coverage, then ask Claude to offer a code critique and propose refactoring ideas. I review them and decide which to implement. It is token-heavy but produces good, elegant codebases at the scales I am working at for my side projects. I do this for every feature that is completed, and have it maintain design docs that record the software architecture choices made so far. It largely ignores them when vibing very interactively on a new feature, but they do help with the regular refactoring.
In my experience, it doubles the token costs per feature but otherwise it works fine.
I have been programming since I was 7, which is 40 years now. Across all tech stacks, from bare-bones assembly through enterprise architecture for a large organisation. I thought I was a decently good coder, programmer and architect. Now, I find the code Claude/Opus 4.5 generates for me to be, in general, of higher quality than anything I ever made myself.
Mainly because it does things I'd be too tired to do, or would never bother with, because why expend energy on refactoring something that is perfectly working and not to be further developed?
Btw, it's a good teaching tool. Load a codebase or build one, and then have it describe the current software architecture, propose changes, explain their impact, and so on.
> I thought I was a decently good coder, programmer and architect. Now, I find the code Claude/Opus 4.5 generates for me to be in general of higher quality than anything I ever made myself.
I have about the same amount of experience as you do, and the same experience using Opus 4.5.
If this is true, you weren’t a very good programmer. There’s much more to code quality than refactoring working code.
> If this is true, you weren’t a very good programmer. There’s much more to code quality than refactoring working code.
Yup, my conclusion exactly.
With that said, most code I have seen in the private sector is almost objectively horrible (and certainly subjectively so). Code manufactured with the current best tools such as Claude compares favourably. Companies rarely have the patience to pay for well-manicured, elegant code. If it sort of works, it ships.
The amount of software needed and the amount being written are off by many orders of magnitude. It has been that way since software's inception, and I don't see it changing anytime soon. AI tools are like having a junior dev to do your grunt work. Soon it will be like a senior dev. Then like a dev team. I would love to have an entire dev team to do my work. It doesn't change the fact that I still have plenty of work for them to do. I'm not worried AI will take my job; I will just be doing bigger jobs.
> Do you not fear that future/advanced AI will be able to look at a vibe-coded codebase and make sensible refactors itself?
This is a possibility in very well-trodden areas of tech, where the stack and the application are both banal to the point of being infinitely well-represented in the training.
As far as anything with any kind of moat whatsoever? Here, I'm not too concerned.
I am no longer sure that's the case. I had it chew through a gnarly problem with my own custom WebRTC implementation on an ESP32 SoC. It did not rely on any existing documentation, as this stuff is quite obscure; it relied on me pointing it at the WebRTC specs and the ESP32 SDK docs, plus quite a lot of prompting. But it solved problems I was dreading to solve manually in a single 2-hour session. That's for a hobby project; we are now starting to experiment with this in the enterprise, on obscure and horrible-to-work-with platforms (such as some industry-specific Salesforce packages). I think Claude can work effectively with existing code and specs on things that would never have made it to Stack Overflow before.
That might be true for WebRTC...
Yes, I immediately see the need for the opposite: perfect, accurate, provably bug-free software. As long as there is AI, there will be AI slop.
Well, there was no perfect, accurate, provably bug-free software even before AI. Maybe the problem is not AI but economic incentives and a lack of care.