I like coding, I really do. But like you, I like building things more than I like the way I build them. I don't find myself missing writing code by hand as much.

I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".

But I do worry. The main question is this - will there be a day when AI knows what "the right things to build" are and has the "agency" (or the illusion of it) to pursue them better than an AI+human pair (assuming AI reaches the "build things right" phase first, which it hasn't yet)?

My main hope is this - AI has been able to beat a human at chess for a while now, yet we still play chess. People earn money from playing chess and teaching chess, chess players are still celebrated, and YouTube influencers still get monetized for analyzing the games of celebrity chess players, even though the top human chess player would likely lose to a Stockfish engine running on my iPhone. So maybe there is hope.

> will there be a day when AI knows what "the right things to build" are and has the "agency" (or the illusion of it) to pursue them better than an AI+human pair (assuming AI reaches the "build things right" phase first, which it hasn't yet)?

Of course, and if LLMs keep improving at current rates it will happen much faster than people think.

Arguably you don't need junior software engineers anymore. When you also don't need senior software engineers, it isn't that much of a jump to not needing project managers, managers in general, or even software companies at all.

Most people, in order to protect their own ego, will assume *their* job is safe until the job one rung down from them disappears and then the justified worrying will begin.

People on the "right things to build" track love to point out how bad people are at describing requirements, and so assume their job as a subject matter expert and/or customer-facing liaison will be safe. But does it matter how bad people are at describing requirements if iteration is lightning fast with the human element removed?

Yes, maybe someone who needs software and who isn't historically some sort of software designer is going to have to prompt the LLM 250 times to reach what they really want, but that'll eventually still be faster than involving any humans in a single meeting or phone call. And a lot of people just won't really need software as we currently think about it at all, they'll just be passing one-off tasks to the AI.

The real question is what happens when the labor market for non-physical work completely implodes as AI eats it all. Based on current trends, I predict that in terms of economics and politics we'll handle it as poorly as possible, leading to violent revolution and possible societal collapse, but I'd love to be wrong.

> I do find that the developers who focused on "build the right things" mourn less than those who focused on "build things right".

I've always been strongly in the first category, but... the issue is that 10x more people will be able to build the right things. And if I build the right thing, it will be easy to copy. The market will get crowded, so distribution will become even harder than it is today. Success will be determined by personal brand, social media presence, and social connections.

> Success will be determined by personal brand, social media presence, and social connections.

Always has been. (Meme)

For me, photography is the metaphor - https://raskie.com/post/we-have-ai-at-home - We've had the technology to produce a perfect 2D likeness of a subject for close to two centuries now, and people are still painting.

Video didn't kill the radio star either. In fact the radio star has become more popular than ever in this, the era of the podcast.

While what you're saying is true, I think it is important to recognize that painting in a way that generates a livable income is mostly a marketing gig.

Likewise, being a podcaster, or "influencer" in general, is all about charisma and marketing.

So with value destruction for knowledge workers (and perhaps physical workers too, once you factor in robotics), we may in fact be moving into a real "attention economy" where all value is tied to being a charismatic marketer. That will be good for some people for a while and terrible for the majority, but even for the winners it seems like a limited reprieve. Historically speaking, charismatic marketers can only really exist through the patronage of people who mostly aren't themselves charismatic marketers. Without patrons (who have disposable income to share), the charismatic marketers are eventually just as fucked as everyone else.

> The main question is this - will there be a day when AI knows what "the right things to build" are

What makes you think AI isn't already at the same level of quality, or higher, for "build the right things" as it is for "build things right"?

> will there be a day when AI knows what "the right things to build" are and has the "agency" (or the illusion of it) to pursue them better than an AI+human pair

I share this sentiment. It's really cool that these systems can do 80% of the work. But given what this 80% entails, I don't see a moat around that remaining 20%.

I can't explain it in a way that will make sense or base it on any data, but I do see that moat all the time. For example, Microsoft / GitHub somehow took a three-year lead (with Copilot), lost it to Cursor in months, and are still catching up.

Microsoft / GitHub have no real limitation preventing them from doing better or moving faster. Maybe it's the big-company mentality: moving slower, fearing risks when you have a lot to lose, or the fact that the personal incentive for a product manager at GitHub is much, much lower than that of a co-founder of a seed-stage startup. Copilot was a microscopic line item for Microsoft as a whole, and probably marginal for GitHub too. But for Cursor, it was everything.

This is why we have innovation: if mega-corps didn't promote people to their level of incompetence, if bureaucracy and politics didn't ruin every good thing, if private equity didn't bleed every beloved product to the last penny, we would have no chance at innovation or entrepreneurship, because these companies have practically unlimited resources.

So my only conclusion from this is - the moat is sometimes just the will to do better, to dare to say: I don't care if someone has a billion dollars to compete with me, I'll still do better.

In other words, don't underestimate the power of big companies to make colossal mistakes and build crappy products. My only worry is that AI won't make the same mistakes, and we'll basically end up with a handful of companies in the world (the makers of the models, the owners of the tokens, e.g. OpenAI, Anthropic, Google, Amazon, Meta, xAI). If AI-led product teams manage to avoid the modern corporate mistake of ruining everything good they get their hands on, then maybe software-related entrepreneurship will be dead.

Computers are better at chess. Humans invented chess and enjoy it.

I think humans have the advantage.

  >  "build the right things" [vs] "build things right"
I think this (frequent) comparison is incorrect. There are times when quality doesn't matter and times when it does. Without that context these discussions are meaningless.

If I build my own table no one really gives a shit about the quality besides me and maybe my friends judging me.

But if I sell it, well then people certainly care[0] and they have every right to.

If I build my own deck at my house, people do also care, and there's a reason I need a permit for it: the danger it can pose to others. It's not a crazy thing to get your deck inspected, and that's really all there is to it.

So I don't get these conversations, because people are just talking past one another. Look, no one gives a fuck if you poorly vibe code your personal website; or at least it's at the same level as building your own table. But if Ikea starts shipping tables with missing legs (even if it's just 1%), then I sure give a fuck, and all the customers have a right to be upset.

I really think a major part of this concern with vibe coding is about something bigger. It's about slop in general. In the software industry we've been getting sloppier and sloppier, and LLMs significantly amplify that. It really doesn't matter if you can vibe code something with no mistakes; what matters is what the businesses do. Let's be honest: they're rushing and don't care about quality, because they have markets cornered and consumers are unable to accurately evaluate products prior to purchase. Those are the textbook conditions for a lemon market. I mean, the companies outsource tech support, so you call and someone picks up whose accent makes you suspicious that their real name is "Steve". After all, it's the fourth "Steve" you've talked to as you get passed around from support person to support person. The same companies contract out coding to poor countries, and you find random comments in another language in the codebase. That's the way things have been going. More vaporware. More half-baked products.

So yeah, when you have no cake, a half-baked cake is probably better than nothing. At home it also doesn't matter whether you're eating a half-baked cake or one that competes with the best bakers in the world. But what do everyday people who can't bake their own cakes do? All they see is a box with a cake in it: one for $1, another for $10, and another for $100. They look the same, but you can't know until you take a bite. You try enough of the $1 cakes, and by the time you give up, the $10 cakes are all gone. By the time you're frustrated enough to buy the $100 cake, those are gone too.

I don't dislike vibe coding because it is "building things the wrong way" or any of that pretentious notion. I, and I believe most people with a similar opinion, care because "the right things" aren't being built. Most people don't care how things were built, but they sure do care about the result. Really people only start caring about how the sausage is made when they find out that something distasteful is being served and concealed from them. It's why everyone is saying "slop".

So when people make this false dichotomy, it just feels like they aren't listening to what's actually being said.

[0] Mind you, it is much easier for an inexperienced person to judge the quality of a table than of software. You don't need to be a carpenter to know a table's leg is missing or that it's wobbly, but that doesn't always hold true for more sophisticated things like software or even cars. If you haven't guessed already, I'm referencing lemon markets: https://en.wikipedia.org/wiki/The_Market_for_Lemons

Of course. I mean, my view is that it needs to be "build the right things right", vs "build things right and then discover if they are the right things". The latter is a stab at premature optimisation: focusing on code elegance more than on delivering working software. Code simplicity, good design, and scalability are super important for maintainability, even in the age of AI (maybe even more so).

But considering that AI will more and more "build things right" by default, it's up to us humans to decide what the "right things to build" are.

Once AI knows the "right things to build" better than humans do, that's AGI in my book, and also the end of classical capitalism as we know it. Yes, there will still be room for a "human generated" market, like we have today (photography didn't kill painting, but it made it much less of a mainstream employment option).

In a way, AI is the great equalizer. In the past the strongest men prevailed; then, when muscle was no longer the main means of asserting force, it was intellect; now it's just sheer want. You want to do something? Now you can. You have no excuses; you just need to believe it's possible and do it.

As someone else said, agency is eating the world. For now.

  >  it needs to be "build the right things right", vs "build things right and then discover if they are the right things"
I still think this is a bad comparison, and I had hoped my prior comment would address this. Frankly, you're always going to end up in the second situation[0], simply because of two hard truths: 1) you're not omniscient, and 2) even if you were, the environment isn't static.

  > But considering that AI will more and more "build things right" by default
And this is something I don't believe. I say a lot more here[1], but you can skip my entire comment and just read what Dijkstra has to say himself. I dislike that we often pigeonhole the LLM coding conversation into one about deterministic vs probabilistic languages. Really, the reason I'm not in favor of LLMs is that I'm not in favor of natural language programming[2]. The reason I'm not in favor of natural language programming has nothing to do with its probabilistic nature and everything to do with its lack of precision[3].

I'm with Dijkstra because, like him, I believe we invented symbolic formalism for a reason. Like him, I believe that abstraction is incredibly useful and powerful, but it's about choosing the right abstraction for the job.

[0] https://news.ycombinator.com/item?id=46911268

[1] https://news.ycombinator.com/item?id=46928421

[2] At the end of the day, that's what they are. Even if they produce code, you're still treating the LLM as a transpiler: turning natural language into code.

[3] Okay, technically it does, but that's because probability has to do with this[4], and I'm trying to communicate better: most people aren't going to connect the dots (pun intended) between function mappings and probabilities. The lack of precision is inherently representable in the language of probability, but most people aren't familiar with terms like "image" and "pre-image", or "push-forward" and "pull-back" (see the sketch after footnote [4]). The pedantic nature of this note is precisely illustrative of my point.

[4] https://www.mathsisfun.com/sets/injective-surjective-bijecti...
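If it helps, here's a minimal sketch of the mapping picture I have in mind, in LaTeX-style notation. The symbols are entirely my own labels, nothing from the links: L for source texts, P for prompts, S for program behaviors, D(S) for distributions over S, and theta for an arbitrary likelihood threshold.

  % A compiler is a function: each source text denotes exactly one program,
  % so you can reason backwards from a desired behavior to the text to write.
  c : L \to S
  
  % A natural-language front end is, at best, a kernel: each prompt yields a
  % distribution over programs (the push-forward of the model's sampling).
  k : P \to \mathcal{D}(S)
  
  % The "pre-image" of the program s you actually want is a fuzzy set of
  % prompts, which you can only search by iteration:
  k^{-1}(s) \approx \{\, p \in P : k(p)(s) > \theta \,\}

That gap between a crisp pre-image and a fuzzy one is the lack of precision I'm pointing at.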