This is so exactly right, and I've been saying it to whoever will put up with me... (and now I'm embarrassed I have no link to show for it. Oh well, shame is good for writing. Envy too!)
Software production is now so easy that everything is a .emacs file (pronounced "dot emacs" btw): meaning, each individual has their own entirely personal, endlessly customizable software cocoon. As tptacek says in the OP, it's "easier to build your own solution than to install an existing one" - or to learn an existing one.
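(For the uninitiated: a .emacs is just a file of personal Emacs Lisp that runs at startup. A purely illustrative fragment, every line an idiosyncratic preference that makes sense mainly to its author:)

```elisp
;; Illustrative .emacs fragment -- nobody else's would look like this.
(setq inhibit-startup-screen t)            ; skip the splash screen
(global-set-key (kbd "C-c g") #'rgrep)     ; a pet keybinding
(defun my/scratch ()                       ; a tiny bespoke command
  "Jump straight to the *scratch* buffer."
  (interactive)
  (switch-to-buffer "*scratch*"))
```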
Another good analogy, not by coincidence, is to Lisp in general. The classic knock against it—one I never agreed with but used to hear all the time—is that Lisp with its macros is so malleable that every programmer ends up turning it into their own private language which no one else can read.
Tangential to that was Mark Tarver's 2007 piece "The Bipolar Lisp Programmer" which had much discussion over the years (https://hn.algolia.com/?query=comments%3E0%20The%20Bipolar%2...). He wrote about the "brilliant bipolar mind" (BBM) - I won't get into how he introduces that or whether fairly or not, but it's interesting given how "AI psychosis", in both ironic and unironic variants, is frequently mentioned these days.
From Tarver's article (https://www.marktarver.com/bipolar.html):
The phrase 'throw-away design' is absolutely made for the BBM and it comes from the Lisp community. Lisp allows you to just chuck things off so easily, and it is easy to take this for granted. I saw this 10 years ago when looking for a GUI to my Lisp [...] No problem, there were 9 different offerings. The trouble was that none of the 9 were properly documented and none were bug free. Basically each person had implemented his own solution and it worked for him so that was fine. This is a BBM attitude; it works for me and I understand it. It is also the product of not needing or wanting anybody else's help to do something.
Sounds pretty 2026, no? He goes on:
The C/C++ approach is quite different. It's so damn hard to do anything with tweezers and glue that anything significant you do will be a real achievement. You want to document it. Also you're liable to need help in any C project of significant size; so you're liable to be social and work with others. You need to, just to get somewhere. And all that, from the point of view of an employer, is attractive. Ten people who communicate, document things properly and work together are preferable to one BBM hacking Lisp who can only be replaced by another BBM (if you can find one).
---
When production is so easy, consumption becomes the bottleneck [1], and suddenly sharing is a problem. This is why the Emacs analogy is so good. A .emacs file is as personal as a fingerprint. You might copy snippets into yours, but why would you ever use another person's? (other than to get started as a noob). You just make your own.
The more customized these cocoons get, the harder they are for anybody else to understand—or to want to. It isn't just that another's cocoon has too high a cognitive cost to bother learning when you can just generate your own. It's also uncomfortable, like wearing someone else's clothes. The sense of smell somehow gets involved.
I would call this, maybe not AI psychosis, but AI solipsism.
In software it's fascinating how configuration management (that boringest of all phrases) is becoming the hard part. How do you share and version the source? What even is the source? Is it the prompts? That's where the OP heads at the end: "share it somewhere — or, better yet, just a screenshot and the prompts you used to make it." But when I floated a couple trial balloons about whether we might use this for Show HN—i.e., don't just share the code you generated, because that's not the source anymore; instead share the prompts—we got a lot of pushback from knowledgeable people (summarized here: https://news.ycombinator.com/item?id=47213630).
These dynamics can only be what's behind the pipe-bursting pressure that Github has been under. What a Github successor would look like is unclear, but as a clever friend points out, there will have to be one. Projects and startups along these lines are appearing, but we seem to be in the horseless carriage phase still.
Even more importantly, what happens to teamwork? If we are all a BBM now—or rather, if we all have personal armies of BBMs, locked in a manic state, primed at all hours to generate things for us-and-only-us—how do we work together? How do cocoons communicate, interoperate? What does a team of ai solipsists look like? It sounds oxymoronic.
My sense is that a lot of software teams, startups and so on, on the cutting edge of AI-driven / agentic development, are currently contending with this, not (only) philosophically but practically, e.g. how does my generated code compose with your generated code. With these frictions we presumably end up giving back some portion (how much? who can say?) of the productivity gains of generated code. One would expect such effects to show up over time, as the systems being built this way grow in complexity and maintenance/development tradeoffs become things.
I don't see many talking about it publicly yet though, which is a pity. No one wants to be the first to stop clapping and sit down during an obligatory standing ovation, but it's a bummer if you can't (yet) tell interesting stories about downsides and instead have to pretend that this is the first free lunch, the only downsideless upside that ever existed. It makes the discussion more boring and probably slows evolution since the experiments, ironically, are happening in silos.
These are the people doing the most serious and real and advanced work with the new tools (edit: I mean in the field of software dev), so it sucks if all talk of downsides is left to the cynical/curmudgeonly contingent, who for whatever good points they may, er, generate along the way, are obviously wrong about AI having no value for software dev. It's easier to talk about AI wiping out the human race than, say, bug counts going up or productivity levelling off after a while.
Mostly I just want to know what's really going on! and how people are dealing with it and how it is developing over time. Do I have to like go to meetups or something?
[1] That's why a recent paper used the title "Easier to Write, Harder to Read" - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6726702
There's something about this whole situation that rhymes with the issue of LLM-generated prose. It's not that GPT 5.5 writes bad prose (I mean, it doesn't write good prose, but it's not awful). It's that once I pick up on the text being GPT 5.5's, my brain switches into a mode where it starts reminding me "this is just GPT output, you could just ask GPT 5.5 these questions yourself, and get answers better tailored to what you want to know". Why am I reading this one particular artifact of a conversation with the LLM? Once I know what the conversation is about, I can just have a better one myself.
Same deal with a lot of this software. I guess there's some "taste" to it, but mostly what you care about are the ideas and the "recipe".
Also, you should just do a monthly "Vibe HN" thread.
Those are great points and they lead right back to the solipsism thing. Also, you snuck an "It's not that X, it's that Y" in there. Nice.
> you should just do a monthly "Vibe HN" thread
It wouldn't stop people from feeding them into the Show HN stream, which is the problem. If we had a good enough way to tell them apart, we could factor them into two streams, but we don't yet.
> It wouldn't stop people from feeding them into the Show HN stream, which is the problem. If we had a good enough way to tell them apart, we could factor them into two streams, but we don't yet.
But it would allow a culture to grow where posters would keep their submissions contained in those threads.
I don't want to make this about people's faith or whatever, but to put it frankly: I've heard a lot of contemporary Christian music, I don't care for it, and [I like to think] I can reliably recognize it in three notes or fewer,¹ which may or may not bear out in rigorous testing, but saves me a lot of time either way. This feels like it parallels the topic at hand strongly.
1. erring on the side of sounding cooler
> As tptacek says in the OP, it's "easier to build your own solution than to install an existing one" - or to learn an existing one.
I can install WhatsApp in a few tens of seconds. You most definitely spent more time than that writing this comment.
Would you mind sharing a video of you building a custom WhatsApp in less time? Not even starting to think about getting other people to talk to you on your instantly-built messaging solution...
> > "easier to build your own solution than to install an existing one" - or to learn an existing one.
> I can install WhatsApp in a few tens of seconds.
But do you now have an insanely deep knowledge of WhatsApp (i.e. what serious "learning" means)?
As if someone vibe-coding it has that insanely deep knowledge?
It has been 3 hours already since your comment and I have just installed a WhatsApp update and it took around 10 seconds.
We're still waiting for tptacek's DIY WhatsApp alternative since he believes that it's "easier to build your own solution than to install an existing one".
That must be one of the silliest comments I have ever read, and the worst part is that even the moderators agree with the statement.
AI psychosis is indeed real.
To be fair, I think it is true that AI will help nerds (like me) implement their own clients. Without AI, I think "I could make my own client", spend some evenings and weekends proving that I can solve the problem, and then never spend the time I would need to actually make it usable.
And I would love it if more services had an Open API and allowed people to write their own clients. I like the concept of "emacsification of software".
But I find it a little extreme to say "it's faster to build your own than to install an existing alternative". You still have to spend a lot of time building your own, it's just that now it's realistic without taking a sabbatical.
Also, as someone who has developed an ever-growing suite of bespoke tools for my personal workflows using Codex/Gemini CLI over the last year, something I don't see mentioned as often is the "mental overhead" of self-designed apps.
Even if the coding process itself is "effortless" and the agent just churns away to implement whatever I ask for on a dime, it can become exhausting thinking through all my needs/wants, tradeoffs, API shape, etc. Despite not needing to write a line of code myself, or read more than excerpts in the chat, it can turn into a slog after the honeymoon period passes, and it starts to feel like an unpaid job.
Especially as complexity tends to trend upward over time, and what started as a script of a couple dozen lines becomes a 2,000-line behemoth that I struggle to keep in my head purely from a functionality perspective, even with documentation in README.md, AGENTS.md, etc.
I’ve had moments where I’m relieved to discover a popular open source tool that works out-of-the-box as an alternative to my own so I can offload that organizational overhead and decision fatigue to someone else. While benefiting from all their features/enhancements I didn’t have to design or maintain myself over time.
As an example, I had been building a TUI/web app to download and organize ebooks from various sources like Project Gutenberg or Anna's Archive, with a central meta search, and to manage my personal library. It solved the immediate problem at the time, but I kept needing to add missing features, plug holes in the various search integrations, refine the UI, etc. It never quite worked exactly as I wanted, so I kept having to work on it, and it became less and less fun as time went on.
Then I discovered Calibre Web Automated + Shelfmark on GitHub, which did 99% of what I needed plus a lot more, and overall had a level of polish and reliability my tool never reached. Now I just pull a Docker container every so often for updates and made a few tweaks to syncing, but overall I spend vastly more time actually reading/organizing/growing my library vs. tedious vibe-coding sessions, and it feels so much more enjoyable.
I think the "nerds (like me)" part of your observation is something that a lot of AI-enthusiast nerds seriously underestimate. For as long as there's been personal computing, there's been a narrative that everyone would be a programmer if we just made it easier for everyone to program, and we've seen attempt after attempt after attempt to introduce new technologies that will surely, surely, be the key to unlocking this. What we don't seem to consider is the vast circumstantial evidence that the vast majority of users are simply not interested in creating tools, automations, widgets, etc., and never will be.
For my part, I am not only a nerd, I am literally an Emacs-using nerd, and I am not interested in using LLMs to create a plethora of bespoke applications that are subtle tweaks on existing tools. I haven't ruled out using AI to assist in helping me with a program that I've been wanting to write for years, but a lot of what's blocking me on that is figuring out design aspects that an LLM wouldn't be able to help me with in the first place. (I'm also concerned about "vibe-coding" programs that I don't 100% understand, at least if they're programs that I might ever want to release into the world.)
> But I find it a little extreme to say "it's faster to build your own than to install an existing alternative".
Installing an existing alternative might be easy ... once you've found the one which best (i.e. mostly) matches your requirements. The time-consuming task IMO is the time needed to find and then choose between half a dozen (or so) alternatives which all might do the job ... until you've installed them, tested them, and found that they are insufficient for the job you expect them to do.
> Even more importantly, what happens to teamwork?
I can concur with that thought direction. We used to pair and group-program on my team, we have a "Zoom office". Now it has become "let me take this ticket and feed it to Claude, you try the same thing with Copilot, and then we compare the results", or "I'd make a PR with my clunker, you use yours to review it". This shit honestly feels almost pointless. The pair-programming is absolutely dead. Who wants to watch me run several agents, trying to fix multiple things in different work-trees, while I'm juggling them around and fixing inconsistencies in my agents.md?
I've been pushing the idea of building self-governing, fully autonomous cloud pipelines so we'd stop playing "stupid tokenomy" games, and it seems my management is just quietly trying to "keep it down", because I think there's a simple understanding: the moment that shit proves airworthy and actually can fly, a bunch of them are guaranteed to lose their cushy seats.
> Even more importantly, what happens to teamwork? If we are all a BBM now—or rather, if we all have personal armies of BBMs, locked in a manic state, primed at all hours to generate things for us-and-only-us—how do we work together? How do cocoons communicate, interoperate? What does a team of ai solipsists look like? It sounds oxymoronic.
One example of teamwork is how the programmers and researchers worked together to build the UNIX system (https://www.cs.dartmouth.edu/~doug/reader.pdf). It is not a product but an environment optimized for building tools and solving practical problems with tools written in C (while the BBMs were busy with Lisp in Boston ;-)
C++ is a totally different story and you need an IDE for that.
If WASM succeeded in becoming the one universal ABI, it could be the perfect successor to the Unix pipe for the AI age: Wasm modules for libraries that double as terminal tools... one can only imagine.
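A minimal sketch of that idea (the `wasm32-wasi` target and the wasmtime runtime are assumptions here, and `upcase` is a made-up example): one Rust module exposes both a library function and a stdin/stdout entry point, so the same `.wasm` artifact could serve as a library and as a classic pipe filter.

```rust
use std::io::{self, Read, Write};

// Library surface: other modules could in principle link against
// this through a shared Wasm ABI.
pub fn upcase(input: &str) -> String {
    input.to_uppercase()
}

// CLI surface: compiled with `--target wasm32-wasi`, the same module
// runs as a pipe filter, e.g. `echo hi | wasmtime upcase.wasm`.
fn main() -> io::Result<()> {
    let mut buf = String::new();
    io::stdin().read_to_string(&mut buf)?;
    io::stdout().write_all(upcase(&buf).as_bytes())
}
```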
Before WASM was the CLR.
Before the CLR was the JVM.
Before the JVM was the Smalltalk VM.
Before the Smalltalk VM was the Pascal P-Machine.
Before the Pascal P-Machine was the BCPL O-Code interpreter.
I highly agree with the pro-Lisp sentiment. The main article that comes to mind while reading this was also posted a little while back on this forum: https://isene.org/2026/05/Audience-of-One.html
Tarver's piece was new to me, and fun, and spot on. Yes, LLMs bring the emacs cruft heap to the masses. A throwaway culture on disk is a lot less worrisome than one on soil.
So cool to see a dang comment that's an actual comment, rather than a moderation comment.
I wrote a little bit about my experience with this sort of stuff a little while back if you're interested:
https://news.ycombinator.com/item?id=47393437
I would add to that a few more open questions that I haven't seen addressed:
- As more engineers (and non-engineers) pick up coding agents, everyone is authenticating multiple MCPs, creating an n × n explosion of complexity that is impossible to centralise. Multiply this by the number of distinct coding agents for every platform, and visibility is very tough. A lot of platforms also don't support scopes, so you can't enforce safety short of a network proxy, I suppose.
- For non-developers mainly, there's a lack of mental models, such as that <agent> for some desktop app does not imply there is a local LLM running on your machine. I suppose it's a question of trust and education versus starting conservative and progressively onboarding, where we lean toward the former.
- We talk a bit about the idea of sharing prompts, but fundamentally a prompt does not in itself contain quality. I've had internal tools I've made where it's mentioned that Claude made it, and, I mean, yes to a degree, but I did many iterations using my own taste to refine things, and held opinions about how things should operate. Giving someone a prompt won't inherently guarantee anything of quality. I often think about the idea of, e.g., giving a screenshot of Github to an LLM: in a way, you're saying to create a clone, not of what exists today but of a dead echo of the design taste and choices made years ago that persist today. You can create things cheaply, but without taste and good judgment, how can you continue to evolve them in a way that isn't like that "draw the rest of the horse" meme?
- I personally wonder about the tokenmaxxing stories you hear from other companies, and, logically, what happens to glue roles? Does someone like a Microsoft just stack-rank on token count and fire those who actually get work done? I suppose they already hollow out knowledge anyway, so maybe it's nothing new.
- Definitely the thing with internal tooling where eventually you generate so much that you fundamentally have no mental model. It's fine for non-critical stuff and I'm kind of coming around to the idea that it's actually a better position to have no idea of the code and a strong "theory" of how a thing should work than it is to fully understand the code and have zero "theory". Ideally both of course.
Anyway, this isn't a comprehensive ramble, but I've also been a bit disappointed that there hasn't been more talk about the second-order effects. Many things can be true at once: you can see value in LLMs while still being critical of them and the whole DC situation, i.e. Colossus 1 etc.
> easier to build your own solution than to install an existing one
seriously?
In Emacs-land. Obviously clicking a button on the app store is easier than describing to an agent precisely the application that will solve your problem. But Emacs doesn't work this way. There's a whole subthread next to you that got all confused about this and started challenging Dan to like a WhatsApp duel or something; they've all missed this point completely.
Maybe it’s just another cocoon, but I’ve been working on a framework for modular CLIs which allows different humans or agents to spin up different features simultaneously, but with some enforcement of a shared dictionary, aliases, help, logging, formatting, semantic parsing, and a few other things.
It works, it’s powerful, and it's certainly one way to answer the question you pose. I would argue it’s the optimal answer (an answer to RPC, REST, and MCP at the same time), but at minimum it’s an example of an answer and approach. In any case it is a good question and something I’ve given a lot of thought to.
Unfortunately, in the age we’re in now, there’s something lackluster in sharing any solution or design you have. Though the architecture and design of what I’m describing came 0% from AI, everything is assumed to be AI-generated and therefore unimportant? But it is the direct answer to your question, so if anyone’s curious, lmk.