This mirrors insights from Andrew Ng's recent AI startup talk [1].
I recall he mentions in this video that the new advice they are giving to founders is to throw away prototypes when they pivot instead of building onto a core foundation. This is because of the effects described in the article.
He also gives some provisional numbers (see the section "Rapid Prototyping and Engineering" and the slide at ~10:30), where he suggests prototype development sees a 10x boost, compared to a 30-50% improvement for existing production codebases.
This feels vaguely analogous to the switch from "pets" to "livestock" when the industry moved from VMs to containers. Except the new view is that your codebase is more like livestock and less like a pet. If true (and no doubt this will be a contentious topic for programmers who are excellent "pet" owners), then there may be some advantage in this new coding-agent world to getting in on the ground floor and adopting practices that make LLMs productive.
IMO the problem with this pets vs. livestock analogy is that it focuses on the code, when the value is really in the writer's head. Their understanding and mental model of the code is what matters. AI tools can help with managing the code, helping the writer build their models and express their thoughts, but they have zero impact on where the true value is located.
Great point, but just mentioning (nitpicking?) that I've never heard machines/containers referred to as "livestock"; in my milieu it's always "pets" vs. "cattle". I now wonder if it's a geographical thing.
Yeah, the CERN talk* [0] coined the "pets vs. cattle" analogy, and it was way before VMs were cheap on bare metal. I think the wording just evolved as the idea got rooted in the community.
We have used the same analogy for the last 20 years or so. Provisioning 150 cattle servers takes 15 minutes or so, while provisioning a pet takes a couple of hours, at most.
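To make the cattle side concrete, here is a minimal Python sketch; the provision() call and the template fields are hypothetical stand-ins for whatever real API or tooling sits underneath:

    # Cattle: 150 identical servers stamped from one template.
    # provision() is a hypothetical stand-in for the real API call.
    CATTLE_TEMPLATE = {"image": "golden-image-v42", "cpus": 8, "ram_gb": 32}

    def provision(name: str, spec: dict) -> None:
        print(f"provisioning {name} from {spec['image']}")

    for i in range(150):
        provision(f"web-{i:03d}", CATTLE_TEMPLATE)

    # A pet, by contrast, is a one-off: hand-named, hand-tuned,
    # and nursed back to health rather than re-imaged.

The point is that no cattle server is special: losing one means re-running the loop, not an afternoon of archaeology.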
[0]: https://www.engineyard.com/blog/pets-vs-cattle/
*: The Engine Yard post notes that Microsoft's Bill Baker used the term earlier, though CERN's date (2012) checks out with our effort timeline and how we got started.
First time I heard it was from Adrian Cockcroft in... I think 2012; he was definitely talking about it a lot in 2013/2014, and it looks like he got it from Bill. https://se-radio.net/2014/12/episode-216-adrian-cockcroft-on...
Randy Bias also claims authorship: https://cloudscaling.com/blog/cloud-computing/the-history-of...
This tweet by Tim Bell seems to indicate shared credit with Bill Baker and Randy Bias:
https://x.com/noggin143/status/354666097691205633
@randybias @dberkholz CERN's presentation of pets and cattle was derived from Randy's (and Bill Baker's previously).
I didn't mean to dispute who said it first, but wanted to say that we took the terms from CERN, and we got them around the time of their talk.
Boxen? (Oxen)
AFAIK, Boxen is a playful variant of Boxes, not Oxen.
There seems to be a pattern of humorous plurals in English where by analogy with ox ~ oxen you get -x ~ -xen: boxen, Unixen, VAXen.
Before you call this pattern silly, consider that the fairly normal plural “Unices” is by analogy with Latin plurals in -x = -c|s ~ -c|ēs, where I’ve expanded -x into -cs to make it clear that the Latin singular comprises a noun stem ending in -c- and a (nominative) singular ending -s, which does exist in Latin but is otherwise completely nonexistent in English. (This is extra funny for Unix < Unics < Multics.) Analogies are the order of the day in this language.
Yeah. After reading your comment, I thought "maybe the Xen hypervisor is named because of this phenomenon". "Xen" just means "many" in that context.
Also, probably because I'm approaching graybeard territory, thinking about boxen of VAXen running UNIXen makes me feel warm and fuzzy. :D
Thanks for pointing this out. I think this is an insightful analogy. We will likely manage generated code in the same way we manage large cloud computing complexes.
This probably does not apply to legacy code that has been in use for several years, where the production deployment gives you a higher level of confidence (and a higher risk of regression errors with changes).
Have you blogged about your insights? The https://stillpointlab.com site is very sparse, as is @stillpointlab.
I'm currently in build mode. In some sense, my project is the most over-complicated blog engine in the history of personal blog engines. I'm literally working on integrating a markdown editor into the project.
Once I have the MVP working, I will be working on publishing as a means to dogfood the tool. So, check back soon!
Is there a mailing list I can sign up for to be notified? The "check back soon" protocol reminds me of my youth.
Mailing list is on the roadmap but doesn't exist just yet.
What you could do: sign in using one of the OAuth methods, go to the user page, and then go to the feedback section. Send me your email in a message and I'll ping you once the blog is set up.
Sorry it is primitive at this stage, but I'm prioritizing the MVP before marketing.
Oo, the "pets vs. livestock" analogy really works better than the "craftsmen vs. slop-slinger" arguments.
Because using an LLM doesn't mean you devalue well-crafted or understandable results. But it does indicate a significant shift in how you view the code itself. It is more about the emotional attachment to code vs. code as a means to an end.
I don't think it's exactly emotional attachment. It's the likelihood that I'm going to get an escalated support ticket caused by this particular piece of slop/artisanally-crafted functionality.
Not to slip too far into analogy, but that argument feels a bit like a horse-drawn carriage operator saying he can't wait to pick up all of the stranded car operators when their mechanical contraptions break down on the side of the road. But what happened instead was the creation of a brand new job: the mechanic.
I don't have a crystal ball and I can't predict the actual future. But I can see the list of potential futures and I can assign likelihoods to them. And among the potential futures is one where the need for humans to fix the problems created by poor AI coding agents dwindles as the industry completely reshapes itself.
Both can be true. There were probably a significant number of stranded motorists that were rescued by horse-powered conveyance. And eventually cars got more convenient and reliable.
I just wouldn't want to be responsible for servicing a guarantee about the reliability of early cars.
And I'll feel no sense of vindication if I do get that support case. I will probably just sigh and feel a little more tired.
Yes, the whole point is that it is true. But only for a short window.
So consider differing perspectives. Like a teenage kid hanging around the stables, listening to the veteran coachmen laugh about the new loud, smoky machines, proudly declaring how they'll be the ones mopping up the mess, picking up the stragglers, cashing it in.
The career advice you give to the kid may be different than the advice you'd give to the coachman. That is the context of my post: Andrew Ng isn't giving you advice, he is giving advice to people at the AI school who hope to be the founders of tomorrow.
And you are probably mistaken if you think the solution to the problems that arise from LLMs will involve those kids looking to the past. Just as the ultimate solution to car reliability wasn't a return to horses but rather the invention of the mechanic, the solution to problems caused by AI may not be a return to some software engineering past that the old veterans still hold dear.
I don't know about what's economically viable, but I like writing code. It might go away or diminish as a viable profession, which might make me a little sad. There are still horse enthusiasts who do horse things for fun.
Things change, and that's ok. I guess I just got lucky so far that this thing I like doing just so happens to align with a valuable skill.
I'm not arguing for or against anything, but I'll miss it if it goes away.
In my world that isn't inherently a bad thing. Granted, I belong to the YAGNI crowd of software engineers who put business before tech architecture. I should probably mention that I don't think this means you should skimp on safety and quality where necessary, but I do preach that the point of software is to serve the business as fast as possible. I take this to the extent that I actually think our BI people, who are most certainly not capable programmers, are good at building programs. They mostly need oversight on external dependencies, but it's actually amazing what they can produce in a very short amount of time.
Obviously their software sucks, and eventually parts of it always escalate into a support ticket which reaches my colleagues and me. It's almost always some form of performance issue; we see these in part because we have monthly sessions where they can bring us issues they simply can't get to work. Anyway, I see that as a good thing. It means their software is serving the business, and now we need to deal with the issues to make it work even better. Sometimes that is because their code is shit; most times it's because they've reached an actual bottleneck and we need to replace part of their Python with a C/Zig library.
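For anyone curious what that replacement tends to look like, here is a minimal sketch of the Python side using ctypes; the library name (libhotpath.so) and the sum_squares function are hypothetical stand-ins for whatever the actual bottleneck is:

    import ctypes

    # Hypothetical shared library built from C or Zig, e.g.:
    #   zig build-lib -dynamic -O ReleaseFast hotpath.zig
    lib = ctypes.CDLL("./libhotpath.so")

    # Declare the C signature:
    #   double sum_squares(const double *xs, size_t n);
    lib.sum_squares.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
    lib.sum_squares.restype = ctypes.c_double

    def sum_squares(values):
        # Copy the Python floats into a C array and call the native code.
        arr = (ctypes.c_double * len(values))(*values)
        return lib.sum_squares(arr, len(values))

The BI code keeps its Python shape; only the hot loop moves behind the C ABI.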
The important part of this is that many of these bottlenecks appear in areas that many software engineering teams I have known wouldn't necessarily have predicted. Meanwhile, a lot of the areas where traditional "best practices" call for better software architecture work fine for entire software lifecycles while being absolutely horrible AI slop.
I think that is where the emotional attachment is meant to fit in: being fine with all the slop that never actually matters during a piece of software's lifecycle.