I think AI rescue consulting is going to become a significant mode of high-value consulting, similar to specialists who come in to deal with a security breach or do data recovery.
Purely AI-written systems will scale to a point of complexity that no human can ever understand. The defect close rate will taper off, the token burn per defect will scale up, and eventually AI changes will cause more defects on average than they close, leaving the whole system unstable. It will become a special kind of process to clean-room out such a mess and rebuild it fresh (probably still with AI) after distilling out core design principles to avoid catastrophic breakdown.
Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first place, but it will take us 20 years to learn them, just like original software engineering took a lot longer than expected to reach a stable set of design principles (and people still argue about them!).
A non-technical friend of mine has just won some hospital contracts after vibecoding an inventory management solution for them with Claude. They gave him access to IT dept servers, and he called me, extremely lost on how to deploy (can't connect Claude to them) and also frustrated because the app has some sort of interesting data/state issues.
What concerns me about this is that as these stories multiply and circulate, people will just completely stop buying software/SaaS from startups, because 90% or more will be this same thing. It will completely kill the market.
Oracle have routinely had multimillion pound contract failures and people keep buying from them. Big vendors are too big to fail.
Those are custom software or heavily customized implementations of ERP and similar systems for very large organizations. I’m talking more about the SMB market where today it’s possible for a small team to carve out a niche and make a nice living or even bootstrap a venture that competes with a large player that has poor UX or antiquated feature designs.
The reason Oracle can continue failing at those massive projects is simple: everyone fails at them routinely, and often it's the customer's fault.
I used to gripe about various ERP companies but after having dealt with enough, yeah, that's just what the world of ERP systems is like. You will spend your time even with the best of them desiring to scream endlessly at everyone who works there. And they also know your pain but are powerless to help.
Same with Deloitte
no one's getting fired for hiring either one.
> It will completely kill the market.
it will kill all the people in that hospital too
What is this, Humanitarian News?
The real Hackers were the ones actually trying to minimize suffering all along. Not reproduce it at scale.
But the Torment Nexus is such an interesting technical challenge! and I don’t personally torment people: I just move protobufs around! - Software Engineer #1 and #2 excuses
thank you
Yeah but only one of those actually puts those responsible in prison https://en.wikipedia.org/wiki/Elizabeth_Holmes
> On January 3, 2022, the jury found Holmes guilty on four of the seven counts related to defrauding investors: three counts of wire fraud, and one of conspiracy to commit wire fraud. She was found not guilty on four counts related to defrauding patients
I mean, the stories about how stuff was getting built in the late 90s/early 2000s aren’t much worse.
[flagged]
Or you end up with a certification process, which will of course introduce its own problems, but startups doing things the right way, and not just "moving fast and breaking things", can thrive.
As a SWE that has only ever worked for an employer or on his own projects, this makes me wonder: how would someone even get such a contract? Did this person already have a consulting business? Do you just call up random hospitals and ask if you can demo an inventory management system for them? Did this person already know people at the hospital? I know technical folks that do independent consulting, but even with a vibecoded product, how is it that anyone can just get such a contract?
Frictional money.
People really have a misconception about the sums of money that companies operate on, on a regular basis. If you are a people person and essentially know how to sell yourself, you can "scrape" money off the fact that nobody is going to look or think too hard about some contract that represents a tiny fraction of the year's budget.
This hospital will learn some hard lessons. I hope their backup strategy is good. I'm surprised they can field software from an entity that isn't SOC2 & HIPAA certified.
No worries! At worst, the contractor can just tell Claude to make sure the hospital knows they're appropriately certified. And the hospital can use Claude to make sure the certs are valid. Everybody wins, except the ones who end up dead. Or with their health destroyed.
> from an entity that isn't SOC2 & HIPAA certified
What do you think the fake Delve attestation scandal was about? https://news.ycombinator.com/item?id=47444319
As a cybersecurity IR professional, as much as I hate to see this happen to a hospital, this kind of thing is responsible for essentially tripling my income over the last 3 years.
Have you tried to talk him out of it, and have you considered blowing the whistle on him? He could kill people!
Wow. This is like every other gold rush. Millions will walk into the ice and snow, somehow not questioning that their ability to dig is not unique.
Well, selling shovels has always been a good way to deal with that problem
The shovel sellers are ringing the cash register.
This is going to happen all over. Company I'm currently contracting with has gone AI everything (aka technical debt hell), and they're gonna suffer for it. I'm glad my consulting contract ends in 2 months. I don't want to be around for the crash
Don't help him. Let him figure it out by himself, else they (he and hospital) will never learn.
A hospital couldn't learn a bigger lesson from this person than from its existing big players.
(Screams in "deployed in 2026 a new product that only works in Internet Explorer" in healthcare).
I work at a university and we still have some workstations that need IE as well, for a healthcare vendor app that needs ActiveX. Up until recently we even had some machines running Windows 7.
I don't have time for that. I just told him he needs to hire somebody
Or, "help" by asking questions, or otherwise by sharing an AI review/analysis/suggestions, since they're into that kind of thing.
Cleaning up other people's AI mess for them, for free, is definitely not a good use of time.
I'd really like to know how he won contracts, just in general. Did he have some connections? And he doesn't even know how to get it running on a server by himself? There are millions of people who can do that; if he can win contracts, why worry about vibe coding at all? Just hire someone to do it. Winning contracts is the challenge in my view.
I hope you have quoted him a very very high hourly rate.
Did he lie about HIPAA compliance?
Heaven help us.
jfc lmao
Heh. Got a customer recently around this. Entire infrastructure and CI/CD vibecoded. They had half-implemented Kubernetes in GitHub Actions workflows that were several thousand lines long and impossible to understand.
I think the problem will get worse. I dislike the marketing around AI, but I do think it is a useful tool that helps those with experience move faster. If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.
> If you are not an expert, AI seems to create a complex solution to whatever it is you were trying to do.
I've been watching non-developers vibe code stuff, and the general failure mode seems to be ignorance of 3-pick-2 tradeoffs.
They'll spam "make it more reliable" or some such, and AI will best-effort add more intermediary redis caches or similar patterns.
But because the vibe coders don't actually know what a redis cache is or how it works, they'll never make the architectural trade-offs to truly fix things.
I’ve noticed something similar with vibecoded game rendering logic submitted by peers. Sometimes it will be peppered with extraneous checks for nullptr, or early returns on textures that have zero size.
I often wonder if it’s the statistical nature of the LLM mixed with a request in the prompt.
AI LOVES defensive coding. I asked you for code to filter and reduce an array. I didn't ask you for a method that makes sure the array exists and is an array before it does anything else.
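A toy sketch of that complaint (the even-sum task and both function names are made up purely for illustration):

```python
from functools import reduce

# The kind of "defensive" version an LLM often produces unprompted:
def sum_evens_defensive(xs):
    if xs is None:                            # guard nobody asked for
        return 0
    if not isinstance(xs, (list, tuple)):     # type check nobody asked for
        raise TypeError("expected a list")
    evens = filter(lambda x: isinstance(x, int) and x % 2 == 0, xs)
    return reduce(lambda acc, x: acc + x, evens, 0)

# What was actually asked for: just filter and reduce.
def sum_evens(xs):
    return reduce(lambda acc, x: acc + x,
                  filter(lambda x: x % 2 == 0, xs), 0)

print(sum_evens([1, 2, 3, 4]))            # 6
print(sum_evens_defensive([1, 2, 3, 4]))  # 6, plus three guards of dead weight
```

None of the guards is wrong in isolation; the complaint is that they show up by default, whether or not the call site can ever produce those inputs.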
Reminds me of the quote in the original Westworld movie:
“These are highly complicated pieces of equipment… almost as complicated as living organisms.
In some cases, they’ve been designed by other computers.
We don’t know exactly how they work.”
Now how did that work out ;-)
However Michael Crichton imagined it would.
I guess that “well” wouldn’t have sold many books.
Shelve it with the Jurassic Park version where John Hammond builds a safe, profitable theme park, and The Andromeda Strain that gives people the sniffles.
That depends. If this equipment is part of the plot, you're right. If it's part of the premise of the world, "well" would be the expectation.
This might not pan out to be the glorious victory of human craft as you’re imagining it to be.
Here’s a slightly different future - these AI rescue consultants are bots too, just trained for this purpose.
Plausible?
I have already seen Claude 4.7 handle pretty complex refactors without issues. Scale and correctness aren't even 1% of the issue they were last year. You just have to get the high-level design right, or explicitly ask it to critique your design before building it.
> You just have to get the high level design right, or explicitly ask it critique your design before building it.
Do you think people are not giving their agents specs and asking for input?
The ones who end up with messes, no
Very often, no.
Maybe the professional devs, but not the vibecoders
A thing I've noticed is that everyone thinks they prompt better than the next guy.
This. I have this buddy who is not an idiot by any stretch of the imagination, and more adventurous than me in some ways (I don't really run agents on my machine), but when I was looking at his prompts, I sometimes questioned how he gets anything done at all. They're vague and angry demands.
And the bots training the bots are just bots that were trained to train bots?
Nothing that sexy, just thirty odd years of software engineering data from humans.
Commits, design reviews, whitepapers, code reviews, test suites. And, pretty concerningly: chat logs and even keystrokes from employees nowadays.
The way we train specialized bots now is incredibly inefficient, that part is rapidly improving.
One AI can't vibe code out of the mess, so you'd make another AI trained on getting out of vibe coded messes?
That's serious levels of circular thinking right there.
This is literally how training humans has worked for thousands of years.
We train humans to do things untrained humans can not do.
I think that will happen. I think several things can be true at the same time:
- AI Hype
- AI Psychosis
- AI keeps getting better and better until it can work around big AI slop code bases
> AI keeps getting better and better until it can work around big AI slop code bases
The belief in this is a form of AI psychosis, I think.
Maybe in the future but certainly no evidence of this anytime soon
There are untold billions of dollars to be had if you can make this future come to pass. You don't need AGI to make it happen either. You just need to keep making the context windows bigger and keep coming up with updated training data. It's not the outcome I want, but it really does feel within reach. The only limiting factor is going to be token count and cost to process/generate those tokens. But if you don't particularly care about quality, costs are going to have to go up by several orders of magnitude before you start to regret firing your software engineers.
I don't know what happens in a decade when there are no junior engineers, skilled senior engineers are becoming rare, and the only data left to train LLMs on is 200th-generation slop. But AI slop being qualitatively slop is not enough of an obstacle to prevent that future from coming to pass. And billions of dollars will be "saved" along the way.
> Maybe in the future but certainly no evidence of this anytime soon
Here's some anecdotal evidence from me - I cleaned up multiple GPT 4.x era vibecoded projects recently with the latest claude model and integrated one of those into a fairly large open source codebase.
This is something AI completely failed at last year.
Maybe you should try something like this, or listen to success stories, before claiming 'certainly no evidence' about the future?
No evidence? ChatGPT came out 3 years ago. You basically just need to hold a ruler up to the curve.
I'm no expert, but the skeptic's opinion I've heard would be to ask:
What evidence is there that we're not at or close to a plateau of what LLMs are capable of? How do you know the growth rate from 2023 to present will continue into 2029? eg. Is it more training data? More GPUs? What if we're kind of reaching the limits of those things already?
I think we're close to the plateau of what LLMs can do, but they will keep improving. IMHO the results are already showing diminishing returns.
The (leading) LLMs work by consensus, like Wikipedia, OpenStreetMap, a web search engine, or the open-source movement.
What I mean is if I ask LLM "create a linked list", its understanding (of what I want) is already close to the expected ideal. Just like Wikipedia article on linked list, for example.
But the LLMs will continue to improve in breadth and depth of understanding of the world, although technically (in what they CAN do) they have probably already peaked. Similarly, the OSS movement technically peaked in the 90s with the creation of a compiler, an operating system, and a database; that doesn't mean new open source isn't being created.
I'm more curious about how much more capability they can get before the economy collapses.
Ultimately, you are describing a fundamental problem with induction -- Hume's problem of induction to be specific. How can we know that anything that has been shown empirically in the past will continue to be true - we can't. Best to investigate mechanistically:
I don't see why we would assume that we are at a plateau for RL. In many other settings, Go for instance, RL continues to scale until you reach compute limits. Some things are more easily RL'd than others, but ultimately this largely unlocks data. We are not yet compute/energy/physical world constrained. I think you would start observing clear changes in the world around you before that becomes a true bottleneck. Regardless, currently the vast majority of compute is used for inference not training so the compute overhang is large.
Assuming that we plateau at {insert current moment} seems wishful and I've already had this conversation any number of times on this exact forum at every level of capability [3.5, 4, o1, o3, 4.6/5.5, mythos] from Nov 2022 onwards.
Since we're not experts, we treat it as a black box. What are the results? Is the quality of the results improving? Is the improvement accelerating or decelerating?
And the answer appears to be that the improvement is accelerating. So how could it be stopping?
https://metr.org/time-horizons/
I don’t think improvement is accelerating. We went from “computers can’t do these things at all” to “now they can” in a few years with the discovery of transformers, and now we get “it can do the same things, except incrementally better, at a drastically higher cost” every few months.
I don’t think that the current AI paradigm has infinite headroom for improvement, similar to how every other AI approach before it eventually hit a limit.
Incrementally better, at higher cost? A model I'm running on a 10-year-old entry-level computer is better at programming than GPT-4. Those are multiple orders of magnitude of improvement in a few years.
And the link I posted shows the amount of work a query can do increasing non linearly. You can explore the site for more detail and a graph that shows error rates getting halved every couple of months.
No one said anything about infinite. It doesn't mean we don't have headroom to spare.
Software itself took 80-120 years to get where it is today depending on how you count. Time is on AIs side here.
I have personally had success telling Claude that some AI-written system is too complicated and ask it to rewrite it in a more logical way. This sometimes results in thousands of lines of code being deleted. I give an instruction like that if I see certain red flags, eg:
1) same business logic implemented in two different places, with extra code to sync between them
2) fixing apparently simple bugs results in lots of new code being written
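Red flag 1 in miniature (a made-up discount rule; all names here are hypothetical, not from any real codebase): the same business rule lives in two places, and extra code exists only to keep the copies agreeing.

```python
# Copy A of the rule, used by the checkout page:
def checkout_discount(total: float) -> float:
    # 10% off orders over 100
    return total * 0.9 if total > 100 else total

# Copy B, re-implemented separately for the invoice generator:
def invoice_discount(total: float) -> float:
    if total > 100:
        return total - total * 0.1
    return total

# "Sync" code that only exists because the logic is duplicated:
def check_discounts_agree(total: float) -> bool:
    return abs(checkout_discount(total) - invoice_discount(total)) < 1e-9
```

The cleanup is to collapse the two copies into one function and delete the agreement check, not to ask the model for more glue.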
It’s a sign I need to at least temporarily dedicate more effort to overseeing work in that area.
I somewhat agree with the AI psychosis framing of the OP. It takes some taste and discipline to avoid letting things dissolve into complete slop.
It's amusing to me that:
* A belief that AI will keep getting better, presented without evidence, does not yield a lot of skepticism around these parts.
* Your comment saying it is wrong to believe AI will keep getting better, also presented without evidence, is downvoted.
> Purely AI written systems will scale to a point of complexity that no human can ever understand
I think it will be needless verbose complexity.
I kind of imagine someone having an unlimited budget of free amazon stuff shipped to their house.
In theory, they are living a prosperous life of plenty.
In reality, they will be drowning in something that isn't prosperity.
I don't understand this point of view at all. There's a symmetry that is going entirely unappreciated by most of the comments in the thread: just as I can give Claude X,000 words of text to use to describe the code I want it to write, I can also give it some existing code and ask for X,000 words of text explaining what it does. (Call it, oh, I don't know, a "spec," maybe.)
The explanation, in turn, can be fed back to recreate the functionality of the original code.
At that point, why care about the code at all? If it works, it works. If it doesn't, tell the model to fix it. You did ask for tests, right?
That is where we're indisputably headed. It's not quite a lossless loop yet, but those who say it won't or can't happen bear a heavy burden of proof.
Code is not spec. There is an implementation spectrum.
On one end, you have code that can perform only the behaviour explicitly declared in the spec, but has to be thrown away and rewritten for any new or updated spec.
On the other end, you have code that implements or anticipates a wide range of future possible specs including the given one.
The AI can operate on any point on this spectrum, but it's not very good at choosing. The more complex the software, the more such choices need to be made.
When the number of bad choices reaches a certain critical mass, even a skilled engineer becomes powerless to undo all the bad choices, and even a powerful model becomes unable to reduce it back to a coherent spec.
Code is not spec.
It is now, and vice versa. Deal with it.
following along with the amazon analogy...
Some people are mindful about what they get and don't get from amazon and don't die from prosperity. ("you might use AI to increase your prosperity")
the rest of the world eats too much and dies of heart disease/diabetes. ("the rest of the world will flounder more and AI will do more stuff to them than for them")
"Purely AI written systems will scale to a point of complexity"
You have not seen the spreadsheets that accounts run the firm on.
Bloody kids!
I've already done a handful of these gigs for early vibecoded products that had collapsed in on themselves. The scope of work was to stabilize the product and only make existing features work.
The issues have all been structural, not local. It's easier to treat it like a rewrite using the original as a super detailed product spec. Working on the existing codebase works, but you have to aggressively modularize everything anyway to untangle it rather than attack it from the top down.
All of these projects have gone well, but I haven't run into a case where a feature they thought was implemented isn't possible. That will happen eventually.
It's honestly good, quick work as a contractor. But I do hope they invest in building expertise from that point rather than treating it like a stable base to continue vibecoding on.
How do you find this type of work??
I've worked with many people over the years. A bunch of product people have struck out to make their own thing now that they can get a feedback loop going. I just keep in touch with people. They know my services are available, so if they have a need they reach out.
The greatest asset in this type of work is genuinely liking people, being good at what you do, and keeping in touch. My email is easily findable for a reason.
But it's so easy now to redo it all ground up, and if models improve, do it better next time.
I exaggerate only a little.
Pretty much. We're intensely vibe coding something that has gone through so many requirement changes. The code has become very gnarly. I took a stab at basically a one-prompt rewrite of the whole thing. It wasn't all the way there, but it was 80% of the way there, and a hell of a lot cleaner.
I'm with you on this one, having "vibe coded" some smaller internal tools on GPT 5, and then re-vibed it on Opus 4.6 and 5.5 -- they basically just fixed all of the problems without me doing much of anything other than prompting it to look at the existing code and make it "better".
How much is your budget for tokens?
As long as it's under the budget for X number of senior software devs, it seems competitive.
> Purely AI written systems will scale to a point of complexity that no human can ever understand
But won’t those more complex systems presumably solve more complex problems than the systems that humans could build? Or within a comparable time?
I think it is reasonably safe to assume at this point in the game that these AI systems are increasingly able to reason rigorously about novel problems presented to them, of ever increasing complexity and sophistication.
My company and my buddy's company are experiencing the same thing. We are trying to fire a SaaS vendor and it's become the hot new project. Now we go to these meetings with 50 different people who are allegedly stakeholders, plus two or three product managers who have already vibecoded their version of something.
Ultimately, if you want to move fast, it's better just to have one engineer vibe coding something. But that engineer is under so much pressure. Now he's got one legacy mode and another legacy mode because the requirements keep changing. And now there's a deadline in four weeks.
This all could work just fine, but the ungodly amount of attention that this world is getting puts too many cooks in the kitchen, which is always a recipe for disaster.
> reach a stable set of design principles
Are you sure about this? Yes, there is a stable set, but they are used in all of the wrong places, particularly in places where they don't belong because juniors and now AIs can recite them and want to use them everywhere. That's not even discussing whether the stable set itself is correct or not - it's dubious at this point.
As the models keep improving, wouldn’t you be able to task a newer AI to “clean up this mess”?
Yes. And as the models get better, it works better. But at one point you do have to understand the code because it's also just guessing as to what your actual intentions are.
It doesn't know what mess you want to clean up. A lot of times AI just starts making up new patterns on top of other patterns and having backwards compatibility between the two. How does it know which one you actually like?
Someone responded to a previous comment of mine [0] positing a Peter principle [1] of slopcoding — it will always be easier to tack on a new feature than to understand a whole system and clean it up. The equilibrium will remain at the point of near, but not total, codebase incomprehensibility.
[0] https://news.ycombinator.com/item?id=48037128#48038639
[1] https://en.wikipedia.org/wiki/Peter_principle
How is a newer AI going to "clean up" dropped databases, compromised computers or leaked personal data?
(None of above is theoretical)
I really am surprised that people on a heavy CS themed forum still have trouble grasping this.
Imagine the year is 1995; C exists, but some guy out there is working on essentially what modern Python is. He says to you: "check out this language, you can just import stuff and use it, and dynamically modify anything at run time". You can probably come up with hundreds of arguments about things that could go wrong, like memory cleanup, threading, etc., but it turns out that, incrementally, they were all solved, and we have modern Python, which is basically good enough to build these large LLM models.
Now imagine modern programming and computing is what C was back in 1995, and AI use is that guy building the Python code.
Frankly, this is what everyone is counting on, whether they know it or not. The question, though, is not "will the models get good enough?". The question is: does the repo even contain enough accurate information to determine what the system is even supposed to be doing?
Are they improving? I thought they were just getting more expensive
Mythos apparently wrote a poem so beautiful it made Dario cry.
Roses are red
Violets are blue
AI is great
And so are you
Crocodile tears, just like the fake "fear" of its capabilities. Anything to raise another round of dumb oil money.
People are often skeptical when I say this, but there's simply no guarantee that it's possible in principle to clean up a bad architecture. If your system is "overfitted" to 10,000 requirements from 1,000 customers, it may be impossible to satisfy requirements 10,001 through 10,100 without starting over from scratch.
It may be difficult, but impossible is such a big word to use here
It's really not that big of a word. The CAP theorem shows that as few as three reasonable-sounding requirements with no obvious conflicts can be impossible to satisfy simultaneously. (User needs will start more flexible than strict mathematical requirements, of course, but once people start to build production workloads on top of your systems that flexibility is radically reduced.)
How could anyone answer that with any level of certainty?
Ai runs `rm -rf`
Beyond the Singularity, we reach the Nullarity.
https://youtu.be/m0b_D2JgZgY
What you're describing really isn't a new problem for organizations. Historically it's been a team of humans not using AI who gets over their skis and they have to have other more capable humans (also not using AI) to bail them out.
Those design principles it will take us 20 years to learn are just the principles for writing good, maintainable, debuggable, understandable code today. It will just take 20 years to figure out that they still apply when AI writes the code, too.
No. You can use AI to code this way. I’ve successfully steered AI to implement good architecture by moving slowly and constantly course correcting
Yes but most people won’t.
Why would it take 20 years to learn? People all around me, in an AI-pilled company, have been saying this the whole time.
That sounds so horrible, though. It's akin to people working as COBOL devs because someone has to do it, so they'll get the big bucks. Except I've never heard of anyone who actually likes COBOL and the more I've learned about how mainframe development actually works, the more horrified I've become haha. Dealing with an LLM spaghetti codebase sounds like hell.
There's an LLM for that too...
https://www.hypercubic.ai/hopper
The complexity you would come to the rescue to solve, would that be from AI or from the style of programming you let the AI have? I mean, you have very different problems if you use functional style vs object-oriented. It is up to the programmer to realize they want a functional style and request that from the AI, as much as possible. Even AI cannot imagine every state transition, unless it is so smart that it should be the one telling you what to do.
> Purely AI written systems will scale to a point of complexity that no human can ever understand
In their current forms, it's unlikely for a product that actually needs to work.
It's not getting that complex and working with current LLMs.
We already know them, but everyone is busy throwing them in the trash. It's all gas and no brakes or handling right now.
> I think AI rescue consulting is going to be come a significant mode of high value consulting
I thought the same when I saw development outsourced to Indians who struggled to write a for loop.
I was wrong.
It turns out that customers will keep doubling down on mistakes until they’re out of funds, and then they’ll hire the cheapest consultants they can find to fix the mess with whatever spare change they can find under the couch cushions.
Source: being called in with a one week time budget to fix a mess built up over years and millions of dollars.
What happened after development was outsourced to Indians: developer salaries continued to rise much faster than general wages.
If you work like you're outsourcing to the worst consultancy firms, your use of AI will be ... pretty productive, actually.
I'm sure AI capabilities will plateau any moment now..
> Purely AI written systems will scale to a point of complexity that no human can ever understand and the defect close rate will taper down and the token burn per defect rate scale up and eventually AI changes will cause on average more defects than they close and the whole system will be unstable.
Wow, it’s true, AI really is set to match human performance on large, complex software systems! ;)
Humans who have been writing systems like that for many years know how to maintain and modify them successfully. It’s just that our industry has a bias towards youth who don’t think they have anything to learn from those who came before them.
How do you explain to a junior that this pile of messy code isn't crap but is actually years of integrated knowledge? That the most common principles discussed in computer science (OOP, SOLID, DRY, etc.) are actually just little guides that aren't to be taken to extremes?
Here's a 26-year old post on the exact topic of messiness you raise:
https://www.joelonsoftware.com/2000/04/06/things-you-should-...
A decade ago, I was sitting in on a meeting about a rewrite and, before I could say anything, someone in the first year of her career asked why anyone thought a rewrite would be any cleaner once all the edge cases were handled. Afterwards, I asked her where she learned this. She said "I don't know, it just seems kind of obvious." She went on to be a great engineer and is now a great manager.
I work on internal facing software and every rewrite I've seen in 20 years suffers from the same symptoms. The code/system is a mess because it has been exposed to reality for a decade. Reality is messy. That's why they pay us money, believe it or not.
Greenfield guy comes in, promises the world, and starts from some first principles white papered architecture. It's really lovely until they onboard the first user. Then they slowly commit all the "sins" (features that drive revenue) of the first system.
The firm is stuck supporting N systems indefinitely, because the perfect new system takes so long to cover even 30% of the original system's use cases that management takes a flier on... bear with me... a second rewrite. Now they have 3 systems.
I've seen more 3rd systems than I've seen actual decommissioning of original systems into a single clean new system.
The answer is chipping away, modularizing, and replacing piecemeal Ship of Theseus style. But that does not drive big hires and big promotions.
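The chipping-away approach can be sketched as a routing facade (sometimes called the "strangler fig" pattern): migrated features go to the new module, everything else falls through to the legacy system. A minimal sketch, with entirely hypothetical names (`legacy_handlers`, `new_handlers`, `handle`):

```python
# Legacy system: the messy-but-battle-tested handlers we can't delete yet.
legacy_handlers = {
    "invoice": lambda data: f"legacy-invoice:{data}",
    "report":  lambda data: f"legacy-report:{data}",
    "export":  lambda data: f"legacy-export:{data}",
}

# New system: features are migrated one at a time. This registry grows
# as each piece earns trust in production; the matching legacy entry
# can then be deleted, Ship of Theseus style.
new_handlers = {
    "invoice": lambda data: f"new-invoice:{data}",
}

def handle(feature: str, data: str) -> str:
    """Route to the new implementation if migrated, else fall back to legacy."""
    handler = new_handlers.get(feature) or legacy_handlers[feature]
    return handler(data)
```

The point is that there is never a big-bang cutover: both systems run behind the same facade, and the original is only decommissioned once `legacy_handlers` is empty.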
The bolded quote "It’s harder to read code than to write it." is hilarious given today's context... it has only become more true :)
It's a dice roll to keep the junior around until he unlearns the wrong bits.
Experts know when to break the rules.
Experts take the time to learn why the fence was there in the first place.
Experts are people who have made all the mistakes there are to make in their chosen field.
Including all of the above.
Experts have beginner’s mind.
tell them they need to turn a profit as quickly as possible
Wait if they can do that they’re not juniors anymore :P
> Humans who have been writing systems like that for many years know how to maintain and modify them successfully.
Do they??
Yeah... in my experience people who code like that 'successfully' make modifications that fix an immediate problem while kicking another bug or two further down the road in a never-ending sunk-cost-fallacy of job security...
Yes.
There is a lot of absurdly complex software that runs with high reliability. We hear a lot about the ones that don’t.
I believe this type of person exists.
My team lead has worked on the same software for 30 years. He has the ability to hear me discuss a bug I noticed, and then pinpoint not only the likely culprit, but the exact function that's causing it.
I do the same thing in a project I’ve worked on for 25 years. I’ve had mediocre at best results with AI. It’s useful to discuss concepts with, but the code never handles the nuances of the edge cases.
Then they quit or die.
What is your argument? Should we stop training people on how to do something because we're mortals?
Yep, this is like comparing master craftsmanship with a production line. You're gonna get good attention to detail and a masterpiece from one, and a limited thing that will break after a few years from the other. But for the majority of use cases the second one is enough. And pointing out that the master craftsmanship is "better" is beside the point.
And with one you need to train a guy for 25 years and with the other you need plan mode for a few minutes and then it runs 24/7.
Our society needs more experts, not less.
Do we? We have many buildings being built and very few master masons or whatever nowadays. The number of craftsmen needed to build a 10-story building is very limited. That's what we should aim for in software: far fewer experts needed for the same outcome, so more people can benefit from software.
I want the people building the buildings I live, work and shop in to know what they’re doing so those buildings don’t fall down or let in the wind and rain or require too much maintenance.
And the equivalent for software. It's usable, intuitive, responsive, stays up and running, and doesn't leak my private data.
Ok but you do want the people building your home to be experts at building homes, yes?
No house I ever lived in was made by experts. The apartment building I grew up in was built by minimum-wage guys who may or may not even have spoken the language of the building overseer, and who had zero specific training or certifications. Some architect somewhere did the plans for a standard building, which the developer purchased and just used.
Then the only "experts" (not even close, just a guy with a form and some technical training) are the building inspectors who come at the end to verify if some stuff is done up to code.
Other than the original architect who drew the plans that got used for many buildings, and the electrical engineer who cleared the electrical, no experts were involved. This is basically how the whole city and most of the country was built.
There's no expert mason or painter or whatever involved. Just a dude that can hold a paint roller. That's the same as going from a craftsman programmer to some dude with claude. Individual quality goes down, but more importantly price goes down way more and so many more people get access to much better quality than having nothing.
there is a large incentive for computer programmers to build themselves up in importance: higher wages, better love lives, more status. but most software is pretty mundane and straightforward, or at least should be. fancy architectures rarely pay off, and the best solutions are sometimes the most obvious. although i could be suffering from that phenomenon people in maths have, where they struggle to understand something and then, once they grasp it, feel dumb, like "ofc i should have known that!"
It’s the old developers who have been doing it the longest who pick the simple and obvious solution.
This is sadly so true.
I have really tried as an "old" person in the field to try and pass on the stuff I've learned, but "craft" and such really has absolutely no home in modern dev culture. The people who care about history, the craft, etc. are increasingly rare.
Executive leadership skews older, not younger, no?
No.
Younger implies cheaper.
it's been 10y and i still haven't seen a human system that bad
maybe some that people said were that bad. but they just needed some elbow grease. remember, it takes guts to be amazing!
The origin of 'dark DNA' begins to make more sense through this sort of lens, except the system somehow maintained a level of compensation to fix all its flaws.
We do as well, it's called bankruptcy. Not every company survives but in the end the ones that do are more resilient.
is this true because training companies have not been training AI for both performance and brevity (or some other metric like that)? If this becomes a much more serious issue surely they would adjust the training processes
Financial auditing with pre-AI technical chops will be uniquely niche-valuable, too :)
> Somewhere in the future, the new software engineering will be primarily about principles to avoid this in the first...
It's really nowhere near as complicated as making distributed systems reliable. It's really quite simple: read a fucking book.
Well, actually read a lot of books. And write a lot of software. And read a lot of software. And do your goddamn job, engineer. Be honest about what you know, what you know you don't know, and what you urgently need to find out next.
There is no magic. Hard work is hard. If you don't like it get the fuck out of this profession and find a different one to ruin.
We all need to get a hell of a lot more hostile and unwelcoming towards these lazy assholes.
Have you watched Jurassic Park? That story is not about Dinos.
AI janitors
Not janitors. Hazmat cleanup crews.
Like this: https://en.wikipedia.org/wiki/Times_Beach%2C_Missouri
Scrape off all the soil, put it in casks, and bury it in a concrete bunker for 10000 years. Then relocate everyone and attempt to rebuild.
It's kind of like producing code is becoming more like farming.
We didn't create the DNA we rely on to produce food and lumber; we just set up the conditions and hope the process produces something we want instead of deleting all the bananas.
Farming is a fine, honorable, and valuable function for society, but I have no interest in being a farmer. I build things; I don't plant seeds and pray to the gods and hope they grow into something I want.
Prayers are for weather. Pretty much all farmed plant, animal, and fungus species have been selectively bred or genetically modified. Farmers know what's going to grow.
Farming involves a lot of study and input into the process, but very little actual control and no determinism at all. We know how to improve the odds, is all. The fact that we breed and "engineer" is like a drop in the bucket.
It's pretty deterministic in that if you plant corn you will grow corn not beets, you know?
If the farming situation were as dire as you seem to suggest, we'd have unpredictable famines all the time, but we don't
You might grow corn, or you might grow defective unusable corn and/or any number of other things like locusts or fungi or other plants that decide to grow in the place where you planted corn. Sure, the corn seeds will not produce ball bearings. Genius observation. There are about an infinity of other things that can and do happen besides that.
Planting is merely setting up the conditions. We didn't write the DNA; we couldn't write the DNA if we wanted to, because we are an infinity away from understanding all the actual processes that descend from it. And when we utilize DNA that we simply found and couldn't hope to write ourselves, it's always, at best, a case of hoping it goes right again this time.
Tell me you've never done any farming without telling me you've never done any farming. There is certainly risk in the business due to market fluctuations, weather, natural disasters, disease, and pests. But the final product is highly deterministic. Almost all genetic variability has been expunged from major food production species in a relentless pursuit of predictable yield. Everything looks and tastes the same. We can debate whether that's a good thing but it is the reality for most farmers.
If it were deterministic, there would be no such thing as blights and other forms of failure. There would be no problem with the bananas, or coffee, or wine grapes. There would be no critical few days of the year where, if anything goes wrong, you lose the entire harvest because it was too humid or too cold or your equipment was out of commission for a week. The bees wouldn't matter at all.
Even when it works, even if you put in a lot of work and experience and understanding, it still just worked by itself and it's just good luck every time.
You have also guessed incorrectly.
My current business plan!
This is def true but I also wonder if AI models and context sizes and capabilities will scale to keep up and eventually be able to untangle the mess.
Interesting perspective. Fundamentally at conflict with the data, science, and 20+ year trends of AI coding systems - to the point of dogmatism. But interesting from a sociological point of view.