This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."

And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The effort you need to expend on defending an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. It's just a completely different type of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about it in the first place.

You just described the burden of outsourcing programming.

Outsourcing development and vibe coding are incredibly similar processes.

If you just chuck ideas at the external coding team/tool you often get rubbish back.

If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.

With the basic and enormous difference that the feedback loop is 100x or even 1000x faster. That changes the game completely, although other issues will probably arise as we try this new path.

That embeds an assumption that the outsourced human workers are incapable of thought and neither experience nor create feedback loops of their own.

Frustrated rants about deliverables aside, I don't think that's the case.

No. It just reflects the harsh reality: what's really soul-crushing in outsourced work is having endless meetings to pass down / get back information, having to wait days/weeks/months to get some "deliverable" back on which to iterate, etc. Yes, outsourced human workers are totally capable of creative thinking that makes sense, but their incentive will always be throughput over quality, since their bosses usually quote fixed prices (at least in my personal experience).

If you are outsourcing to an LLM, in this case YOU are still in charge of the creative thought. You can just judge the output and tune the prompts, or go deeper into the technical details and tradeoffs. You are "just" not writing the actual code anymore, because another layer of abstraction has been added.

Also, with an LLM you can tell it to throw away everything and start over whenever you want.

When you do this with an outsourced team, it can happen at most once per sprint, and with significant pushback, because there's a desire for them to get paid for their deliverable even if it's not what you wanted or suffers some other fundamental flaw.

Yep, just these past two weeks: I tried to reuse an implementation I had used for another project. It took me a day to modify it (with Codex), I tried it out, and it worked fine with a few hundred documents.

Then I tried to push 50,000 documents through it, and it crashed and burned like I suspected. It took one day to go from my second spec - more complicated but more scalable, and not dependent on an AWS managed service - to working, scalable code.

It would have taken me at least a week to do it myself.

It doesn't have to be soul crushing.

Just like people more, and have better meetings.

Life is what you make it.

Enjoy yourself while you can.

It's not strictly soul-crushing for me, but I definitely don't like to waste time in non-productive meetings where everyone bullshits everyone else. Do you like that? Do you find it a good use of your time and brain attention capacity?

Just have better meetings

If we could I think we would be doing that...

It's going to come across as very naive and dumb, but I believe we can; people just aren't aware of the basics, or they simply aren't implementing them.

Harvard Business Review and probably hundreds of other online content providers offer some simple rules for meetings, yet people don't even follow them.

1. Have a purpose / objective for the meeting. I consider meetings to fall into one of three broad categories: information distribution, problem solving, and decision making. Knowing which one you're in will allow the meeting to go a lot smoother, or even let it be moved to something like an email and be done with it.

2. Have an agenda for the meeting. Put the agenda in the meeting invite.

3. If there are any pieces of pre-reading or related material to be reviewed, attach it and call it out in the invite. (But it's very difficult to get people to spend the time preparing for a meeting.)

4. Take notes during the meeting and identify any action items and who will do them (preferably with an initial estimate). Review these action items and people responsible in the last couple of minutes of the meeting.

5. Send out the notes and action items.

Why aren't we doing these things? I don't know, but I think if everyone followed these for meetings of 3+ people, we'd probably see better meetings.

Probably, like most business issues, it's a people problem. They have to care in the first place, and idk if you can make people who don't care start caring.

I agree the info is out there about how to run effective meetings.

I think there's a certain kind of irony in being asked externally to enjoy the rubbish I've been given to eat. It's still rubbish.

Not really, it's just obviously true that the communication cycle with your terminal/LLM is faster than with a human over Slack/email.

100%! There is significant analogy between the two!

There is a reason management types are drawn to it like flies to shit.

Working with and communicating with offshored teams is a specific skill too.

There are tips and tricks on how to manage them, and not knowing them will bite you later on. Like the basic thing of never asking yes-or-no questions, because in some cultures saying "no" isn't a thing. They'd rather just default to yes and effectively lie than admit failure.

[deleted]

YES!

AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.

We need a new word for on-premise offshoring.

On-shoring ;

> On-shoring

I thought "on-shoring" is already commonly used for the process that undos off-shoring.

How about "in-shoring"? We already have "insuring" and "ensuring", so we might as well add another confusingly similar sounding term to our vocabulary.

How about we leave "...shoring" alone?

[deleted]

En-shoring?

Corporate has been using the term "best-shoring" for a couple of years now. My best guess is that it means "off-shoring or on-shoring, whichever of the two is cheaper".

Rubber-duckying... although a rubber ducky can't write code... infinite-monkeying?

In silico duckying

NIH-shoring?

[deleted]

Ai-shoring.

Tech-shoring.

Would work, but with "snoring". :D

vibe-shoring

eshoring

We already have a perfect one

Slop;

Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.

Agreed. The higher-ups at my company are, like most places, breathlessly talking about how AI has changed the profession - how we no longer need to code, but merely describe the desired outcome. They say this as though it’s a good thing.

They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.

FWIW, when a problem truly is weird, AI & vibe coding tends to not be able to solve it. Maybe you can use AI to help you spend more time working on the weird problems.

When I play sudoku with an app, I like to turn on auto-fill numbers, and auto-erase numbers, and highlighting of the current number. This is so that I can go directly to the crux of the puzzle and work on that. It helps me practice working on the hard part without having to slog through the stuff I know how to do, and generally speaking it helps me do harder puzzles than I was doing before. BTW, I’ve only found one good app so far that does this really well.

With AI it’s easier to see there are a lot of problems that I don’t know how to solve, but others do. The question is whether it’s wasteful to spend time independently solving that problem. Personally I think it’s good for me to do it, and bad for my employer (at least in the short term). But I can completely understand the desire for higher-ups to get rid of 90% of wheel re-invention, and I do think many programmers spend a lot of time doing exactly that; independently solving problems that have already been solved.

You touch on an aspect of AI-driven development that I don't think enough people realize: choosing to use AI isn't all or nothing.

The hard problems should be solved with our own brains, and it behooves us to take that route so we can not only benefit from the learnings, but assemble something novel so the business can differentiate itself better in the market.

For all the other tedium, AI seems perfectly acceptable to use.

The sticking point comes when CEOs, product teams, or engineering leadership put too much pressure on using AI for "everything" - insisting all solutions to a problem be AI-first even when it isn't appropriate - because velocity is too often prioritized over innovation.

> choosing to use AI isn't all or nothing.

That's how I have been using AI the entire time. I do not use Claude Code or Codex. I just use AI to ask questions instead of parsing the increasingly poor Google search results.

I just use the chat options in the web applications with manual copy/pasting back and forth if/when necessary. It's been wonderful because I feel quite productive, and I do not really have much of an AI dependency. I am still doing all of my work, but I can get a quicker answer to simple questions than parsing through a handful of outdated blogs and StackOverflow answers.

If I have learned one thing about programming computers in my career, it is that not all documentation (even official documentation) was created equally.

Though it is not like management roles have ever appreciated the creative aspects of the job, including problem solving. Management has always wished to just describe the desired outcome and get magic back. They don't like acknowledging that problems and complications exist in the first place. Management likes to think that they are the true creatives behind company vision and don't like software developers finding solutions bottom-up. Management likes to have a single "architect" and maybe a single "designer" on the creative side, ones they like and who are a "rising" political force (in either the Peter Principle or Gervais Principle sense), rather than deal with a committee of creative people. It's easier for them to pretend software developers are blue-collar cogs in the system rather than white-collar problem solvers with complex creative specialties. LLMs are only accelerating those mechanics and beliefs.

Agreed. I hate to say it, but if anyone thought this train of thought in management was bad now, it's going to get much worse, and unfortunately burnout is going to sweep the industry as tech workers feel evermore underappreciated and invisible to their leaders.

And worse: with few opportunities to grow their skills from rigorous thinking as this blog post describes. Tech workers will be relegated to cleaning up after sloppy AI codebases.

I greatly agree with that deep cynicism and I too am a cynic. I've spent a lot of my career in the legacy code mines. I've spent a lot of my career trying to climb my way out of them or at least find nicer, more lucrative mines. LLMs are the "gift" of legacy-code-as-a-service. They only magnify and amplify the worst parts of my career. The way the "activist shareholder" class like to over-hype and believe in Generative AI magic today only implies things have more room to keep getting worse before they get better (if they ever get better again).

I'm trying my best to adapt to being a "centaur" in this world. (In chess it has become statistically evident that human and bot players alone are generally "worse" than hybrid "centaur" players.) But even "centaurs" are going to be increasingly taken for granted by companies, and at least for me the sense is growing that, as WOPR declared about tic-tac-toe (and thermonuclear warfare), "a strange game; the only winning move is not to play". I don't know how I'd bootstrap an entirely new career at this point in my life, but I keep feeling like I need to try to figure that out. I don't want to just be a janitor of other people's messes for the rest of my life.

> They’re destroying the only thing I like about my job - figuring problems out.

So, tackle other problems. You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?

> You can now do things you couldn't even have contemplated before. You've been handed a near-godlike power, and all you can do is complain about it?

This seems to be a common narrative, but TBH I don't really see it. Where is all the amazing output from this godlike power? It certainly doesn't seem like tech is suddenly improving at a faster pace. If anything, it seems to be regressing in a lot of cases.

I’m a programmer (well half my job) because I was a short (still short) fat (I got better) kid with a computer in the 80s.

Now, the only reason I code and have been since the week I graduated from college was to support my insatiable addictions to food and shelter.

While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do in 40 hours without having other people working on projects I led. Until the last year and a half, when I could do it myself using LLMs.

Seeing my carefully designed spec, including all of the cloud architecture, get done in a couple of days - with my hands on the wheel - when it would have taken at least a week of me doing some of the work while juggling a couple of other people, is life-changing.

Not sure why this is getting downvoted, but you're right — being able to crank out ideas on our own is the "killer app" of AI so to speak.

Granted, you would learn a lot more if you had pieced your ideas together manually, but it all depends on your own priorities. The difference is, you're not stuck cleaning up after someone else's bad AI code. That's the side to the AI coin that I think a lot of tech workers are struggling with, eventually leading to rampant burnout.

What would I learn that I don’t already know? The exact syntax and properties of Terraform and boto3 for every single one of the 150+ services that AWS offers? How to modify a React-based front end written by another developer even though I haven’t done front-end development - and have actively stayed away from it - for well over a decade?

Will a company pay me more for knowing those details? Will I be more effectively able to architect and design solutions that a company will pay my employer to contract me for? They pay me decently not because I “codez real gud”. They pay me because I can go from an empty AWS account, an empty repo, and ambiguous customer requirements (after spending time talking to the customer) to a working solution - a full, well-thought-out architecture + code - on time, on budget, and meeting requirements.

I am not bragging; I’m old, and those are table stakes for staying in this game for three decades.

I became an auto mechanic because I love machining heads, and dropping oil pans to inspect, and fitting crankshafts in just right, and checking fuel filters, and adjusting alternators.

If I wanted to work on electric power systems I would have become an electrician.

(The transition is happening.)

I can't help but imagine training horses vs. training cats. One of them is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.

Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.

Yup - I've related it to working with juniors: often smart, with good understanding and "book knowledge" of many of the languages and tools involved, but you often have to step back and correct things regularly - normally around local details and project specifics. But then the "junior" you work with changes every day, so you have to start again from scratch.

I think there needs to be a sea change in current LLM tech to make that no longer the case - either massively increased context sizes, so they can hold nearly a career's worth of learning (without the tendency to start ignoring that context, as happens at the larger end of today's still-way-too-small-for-this context windows), or continuous training passes that integrate these "learnings" directly into the weights themselves - which might be theoretically possible today, but requires many orders of magnitude more compute than is available, even if you ignore cost.

Try writing more documentation. If your project is bigger than a one-man team then you need it anyway, and with LLM coding you effectively have an infinite-man team.

I've never seen a horse that scratches you.

This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They’ve outsourced it, monetized it, and made it as efficient as possible. It’s also soulless.

To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise, after the fact, that the idea was actually not that good :)

Yeah, it's why I don't like trying to write up a comprehensive design before coding in the first place. You don't know what you've gotten wrong until the rubber meets the road. I try to get a prototype/v1 of whatever I'm working on going as soon as possible, so I can root out those problems as early as possible. And of course, that's on top of the "you don't really know what you're building until you start building it" problem.

> need to make it crystal clear

That's not an upside unique to LLM-written vs. human-written code. When writing it yourself, you also need to make it crystal clear. You just do that in the language of implementation.

And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.

[deleted]

I think harder while using agents, just not about the same things. Just because we all got superpowers doesn't mean the problems go away; they just move, and we still have our full brains to solve them.

It isn't all great - skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is being able to pace oneself, as well as figuring out how to start cracking certain problems.

Uniqueness is not the aim. Who cares if something is uniquely bad? But in any case, yes, if you use LLMs uncritically, as a substitute for reasoning, then you obviously aren't doing any reasoning and your brain will atrophy.

But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.

Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.

Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".

If you use LLMs at very high temperature with samplers that correctly keep the output coherent (e.g. min_p, or better ones like top-h, P-less decoding, etc.), then "regression to the mean" literally DOES NOT HAPPEN!!!!
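For anyone unfamiliar, here's a minimal toy sketch of the idea (my own illustration, not any particular library's implementation): min_p keeps only the tokens that are at least some fraction as likely as the single most likely token, so even an extreme temperature only reshuffles plausible candidates instead of letting the long tail take over.

    import numpy as np

    def sample_min_p(logits, temperature=5.0, min_p=0.3, rng=None):
        """Toy min_p sampler: filter first, then apply a (possibly huge) temperature."""
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=float)
        # Probabilities at temperature 1, used only to decide which tokens survive.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Min-p filter: keep tokens at least `min_p` times as likely as the best token.
        keep = probs >= min_p * probs.max()
        # Apply the high temperature only to the surviving logits.
        scaled = logits[keep] / temperature
        surv = np.exp(scaled - scaled.max())
        surv /= surv.sum()
        # The heat flattens the odds among survivors, but the filter has already
        # discarded the incoherent tail, so the output stays plausible.
        return int(rng.choice(np.flatnonzero(keep), p=surv))

Real sampler chains differ in how they order temperature, min_p, top-k, and so on, but that's the intuition.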

Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.

LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.

To put it another way, a high-temperature mad-libs machine will write a very unusual story, but that isn't necessarily the same as a clever story.

So why is this "temperature" not on, like, a rotary encoder?

So you can just, like, tweak it when it's working against your intent in either direction?

AFAIK there's no algorithmic reason against it, but services might not expose the controls in a convenient way, or at all.
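For what it's worth, hitting the API directly usually does give you the knob; it's mostly the agent frontends that hide it. A rough sketch, assuming the OpenAI Python SDK (the model name and prompt are just placeholders):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Most chat-completion APIs expose temperature as a request parameter;
    # whether a given coding agent surfaces it in its config is another matter.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Rewrite this loop as a list comprehension: ..."}],
        temperature=1.5,  # higher = more varied output, lower = more deterministic
    )
    print(resp.choices[0].message.content)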

High temperature seems fine for my coding uses on GPT5.2.

Code that fails to execute or compile is the default expectation for me. That's why we feed compile and runtime errors back into the model after it proposes something each time.

I'd much rather the code sometimes not work than to get stuck in infinite tool calling loops.

How do you configure LLM temperature in coding agents, e.g. opencode?

https://opencode.ai/docs/agents/#temperature

set it in your opencode.json

Note: when I said "you have to hack it in", I meant you'll need to hack in support for modern LLM samplers like min_p, which enables setting temperature up to infinity (given min_p approaching 1) while maintaining coherence.

You can't without hacking it! That's my point! The only places where you easily can are via the API directly, or "coomer" frontends like SillyTavern, Oobabooga, etc.

Same problem with image generation (lack of support for different SDE solvers, the image version of LLM sampling) but they have different "coomer" tools, i.e. ComfyUI or Automatic1111

Once again, porn is where the innovation is…

Please.. "Creative Writing"

To me it's all abstraction. I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them but I'm happy to work on the new thing that uses what's already there.

This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.

For me, lots of programming is the same. I just want to assemble the pieces

> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make

No, your favorite movie is not crap because the creators didn't grind their own lens. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).

When I read discussions about this sort of thing, I often find that folks look hard for similarities and patterns, but once they find them, they ignore the differences. AI in particular is so full of this "pattern matching" style of thinking that the real significance of this tech - i.e., how absolutely new and different it is - just sort of goes ignored. Or even worse, machines get "pattern matched" into humans and folks argue from that point of view, lol. Witness all the "new musicians" who vibe code disco hits; I'll invariably see the argument that AIs train on existing music just like humans do, so what's the big deal?

But these arguments and the OP's article do reinforce that AI rots brains. Even my sparing use of Google's Gemini and my interactions with the bots here have really dinged my ability to do simple math.

OSes and compilers have a deterministic public interface. They obey a specification developers know, so they can be relied on when writing correct software that depends on them, even without knowing their internal behavior. Generative AI does not have those properties.

Yes, but developers don’t have a deterministic interface either. I still had to be careful about writing out my specs and making sure they were followed. At least I don’t have to watch my tone when my two mid-level ticket-taking developers - Claude and Codex - do something stupid. They also do it a lot faster.

But the code you’re writing is guard railed by your oversight, the tests you decide on and the type checking.

So whether you write the spec’d code out by hand or ask an LLM to do it is beside the point if the code is considered a means to an end, which is what the post above yours was getting at.

Tests and type checking are often highway-wide guardrails when the path you want to take is like a tightrope.

Also, the code is not a means to an end. It’s going to be run somewhere, doing stuff someone wants done reliably and precisely. The overall goal was always to invest some programmer time and salary in order to free up more time for others. Not for everyone to start babysitting stuff.

> They obey a specification developers know

Which spec? Is there a spec that says if you use a particular set of libraries you’d get less than 10 millisecond response? You can’t even know that for sure if you roll your own code, with no 3rd party libraries.

Bugs are, by definition, issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That’s why we accept AI code: it works well enough.

> Is there a spec that says if you use a particular set of libraries you’d get less than 10 millisecond response?

There can be. But you’d have to map the libraries to opcodes and then count the cycles. That’s what people do when they care about that particular optimization. They measure and make guarantees.

That’s not realistic with any processor that has branch prediction, cache hits vs. cache misses, etc.

You can easily compute the worst cases. All the details are in the specs of the processor.

That also assumes you are not running on top of an operating system, or in a VM with “noisy neighbors”…

I haven’t counted cycles since programming assembly on a 65C02, where you could save a clock cycle by accessing memory in the zero page - a two-byte LDA $02 instead of a three-byte LDA $0201.

Then assume the opposite. Build an RTOS and don’t virtualize your software on top of it.

> I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them

Maybe, but beware assuming you could do something you haven't actually tried to do.

Everything is easy in the abstract.

> No, your favorite movie is not crap because the creators didn't grind their own lens.

But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

> But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

Doesn’t that prove the point? You could do that right now, and it would be absolute trash. Just like how right now we are nowhere close to being able to make great software with a single prompt.

I’ve been vibecoding a side project and it has been three months of ideating, iterating, refining and testing. It would have taken me immeasurably longer without these tools, but the end result is still 100% my vision, and it has been a tremendous amount of work.

Pulp Fiction, like many Tarantino movies, also gets much of its effect from using existing songs rather than using an all new soundtrack

Songs he likely hand-picked, for reasons that even you and I don’t know about, instead of songs suggested by an AI with no personal taste.

More to your original point, Tarantino is actually well known for his deliberate uses of rare lenses. He doesn't grind them himself, but he did resurrect a dead lens format for The Hateful Eight:

https://en.wikipedia.org/wiki/Ultra_Panavision_70

And if he did, why would I prefer using his prompt instead of mine?

"Write a gangster movie that I like", instead of "...a movie this other guy likes".

But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies. It is about the process.

This is exactly the process happening in the music space with Suno. Go to their subreddit, they all talk about how they only listen to ‘their’ songs, for the exact reasons you list.

It's bleak out there.

It is very different with music. Music and images fall into the "just shit something out and I don't care what it is" category. Most people prompting for things in this category will be satisfied with anything - they might not admit it, but the degrees of freedom the model has are infinite. Now when you pin the output, let's say a character you generated, and ask for modifications WHILE KEEPING lots of characteristics, you reduce the degrees of freedom from infinite to a small, very constrained set of states. There are workarounds, but natively LLMs can't really do this. You ask the model to rotate an image, and the hair becomes blue and the sword becomes an axe.

With music this is much more pronounced because most people are musically illiterate, so even basic mistakes while dragging characteristics across diffs become invisible. It's an interesting phenomenon, I agree, but it says more about the lack of taste and literacy of the common individual.

But on the point of "thinking hard": with music and artistic production in general, individuals (humans with soul, not NPCs) crave ideas and perspective. It is the play, the relationship between ideas, that is hard to vocalize and describe but can be provocative. Because we cannot describe or understand it, we have no choice other than to provoke a similar contemplation in another.

But make no mistake, nobody is enjoying LLM slop. They have fantasies that now they can produce something of value, or delegate its production. If this becomes true, they instantly lose, and everyone goes directly to the source.

Art is specifically about communicating the inconceivable; it cannot be delegated. If the tool is sufficient to produce art, then the expression is of the tool itself - then the tools ARE the artists.

> But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies.

Do we? I don't think people appreciate Tarantino more than gangster movies. I don't think people appreciate Tarantino more than Pulp Fiction. Frankly, Tarantino doesn't factor in at all.

> It is about the process.

I never considered the process when watching Pulp Fiction. It's the finished product, not the process, that matters.

Put it this way: we know who Tarantino is because of Pulp Fiction. Not the other way around.

> It's the finished product, not the process, that matters.

I think the point is that the finished product depends on the process.

The creative process is not dependent on the abstraction.

> For me, lots of programming is the same. I just want to assemble the pieces

How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nothing because nobody else had written them at the time?

Of course you reuse other parts and abstractions for whatever you're not working on, but each time you do something that hasn't been done before, you can't help but engage the creative process, even if you're sitting on top of 50 years' worth of abstractions.

In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.

>I instead just fill my house/apartment with stuff already made and still feel like it's mine.

I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.

On the subject of furnishing your house the IKEA effect seems to confirm this.

https://en.wikipedia.org/wiki/IKEA_effect

I really appreciate this sentiment. The pace at which new tools and AI protocols are being released feels absolutely overwhelming, leaving a feeling of constantly falling behind. But approaching it from the other end, I can just make the things I come up with, and explore the new protocols only if I can't do the thing with what I've already grasped.

There are two stages to becoming a decent programmer: first you learn to use abstraction, then you learn when not to use abstraction.

Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.

When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.

So, yes, it's all just abstraction, but you can go too far.

Well said, great analogy. Sometimes the level of abstraction feels arbitrary - you have to understand the circumstances that led there to see why it's not.

In higher-end work they do have specialized lighting, branch-power, and feeder electricians. And among feeder electricians, even specialized ones for medium voltage, etc.

AI is not an abstraction.

> No, your favorite movie is not crap because the creators didn't grind their own lens.

One of the reasons Barry Lyndon is over 50 years old and still looks like no other movie today is because Kubrick tracked down a few lenses originally designed for NASA and had custom mounts built for them to use with cinema cameras.

https://neiloseman.com/barry-lyndon-the-full-story-of-the-fa...

> Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok)

Super Mario Bros is known for having a surprisingly subtle and complex physics system that enabled the game to feel both challenging and fair, even for players very new to consoles. Celeste, a newer game also famous for being very difficult yet not feeling punishing, does something similar:

https://maddymakesgames.com/articles/celeste_and_towerfall_p...

> or their own game engine (Plenty of great games use Unreal or Unity)

And Minecraft doesn't, which is why few other games at the time of its release felt and played like it.

You're correct that no one builds everything from scratch all the time. However, if all you ever do is cobble a few pre-made things together, I think you'll discover that nothing you make is ever that interesting or enduring in value. Sure, it can be useful, and satisfying. But the kinds of things that really leave a mark on people, that affect them deeply, always have at least some aspect where the creator got obsessive and went off the deep end and did their own thing from scratch.

Further, you'll never learn what a transformative experience it can be to be that creator who gets obsessive about a thing. You'll miss out on discovering the weird parts of your own soul that are more fascinated by some corner of the universe than anyone else is.

I have a lot of regrets in my life, but I don't regret the various times I've decided to dig deeply into some thing and do it from scratch. Often, that has turned out later to be some of the most long-term useful work I've done, even though it seemed like a selfish indulgence at the time.

Of course, it's your life. But consider that there may be a hidden cost to always skimming along across the tops of the stacks of things that already exist out there. There is growth in the depths.

Did you not read the post? You're talking from the space of the Builder while neglecting the Thinker. That's fine for some people, but not for others.

In 30 years across 10 jobs, the companies I’ve worked for have not paid me to “code”. They’ve paid me to use my experience to add more business value than the total cost of employing me.

I’m no less proud of what I built in the last three weeks using three terminal sessions - one with codex, one with Claude, and one testing everything from carefully designed specs - than I was when I first booted a computer, did “call -151” to get to the assembly language prompt on my Apple //e in 1986.

The goal then was to see my ideas come to life. The goal now is to keep my customers happy, get projects done on time, on budget, and to requirements, and continue to have my employer put cash in my account twice a month - and, formerly, put AMZN stock in my brokerage account at vesting.

But you can move a layer up.

Instead of pouring all of your efforts into making one single static object with no moving parts, you can simply specify the individual parts, have the machine make them for you, and pour your heart and soul into making a machine that is composed of thousands of parts, that you could never hope to make if you had to craft each one by hand from clay.

We used to have a way to do this before LLMs, of course: we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

Even the person making an object from clay is (probably) not refining his own clay or making his own oven.

> we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

I think this makes a perfect counter-example. Because this structure is an important reason for YC to exist and what the HN crowd often rallies against.

Such large companies generally don't make good products this way. Most, today, just buy companies that built something in the GP's cited vein: a creative process, with pivots, learnings, more pivots, failures or - when successful - most often success in an entirely different form or area than originally envisioned. Even the large tech monopolies of today originated like that. Zuckerberg never envisioned VR worlds, photo-sharing apps, or chat apps when he started the campus-fotobook-website. Bezos did not have some 5D-chess blueprint that included the largest internet-infrastructure-for-hire business when he started selling books online.

If anything, this only strengthens the point you are arguing against: a business that operates by a "head" "specifying what they want" and having "something" figure out how to build the parts, is historically a very bad and inefficient way to build things.

And therein lies the crux: some people love to craft each part themselves, whereas others love to orchestrate but not manufacture each part.

With LLMs, and engineers often being forced by management to use them, everyone is pushed to become like the second group, even though it goes against their nature. The former group see the part as an end in itself, whereas the latter view it as a means.

Some people love the craft itself and that is either taken away or hollowed out.

This is really what it’s about.

As someone that started with machine code, I'm grateful for compiled - even interpreted - languages. I can’t imagine doing the kind of work that I do nowadays in machine code.

I’m finding it quite interesting, using LLM-assisted development. I still need to keep an eye on things (for example, the LLM tends to suggest crazy complex solutions, like writing an entire control from scratch, when a simple subclass, and five lines of code, will work much better), but it’s actually been a great boon.

I find that I learn a lot, using an LLM, and I love to learn.

But we become watchers instead of makers.

There is a difference between cooking and putting a ready meal into the microwave.

Both satisfy your hunger but only one can give some kind of pride.

The same thing happens if you are the head cook in a restaurant.

If you are a cook wanting to open a restaurant, you will be delegating; it's the same thing with AI. If you are fine only doing what your hands can possibly do in the time allotted, go ahead and cook in your kitchen.

But I need to make money to be able to trade for the food I eat.

You will make money but the others are the artists.

That’s the whole point. You become a customer of an AI service; you get what you want, but it wasn’t done by you. You get money but not the feeling of accomplishment from cracking a problem. Like playing a video game by following a walkthrough, or solving a crossword puzzle with Google.

Check this out: https://imgur.com/a/aVxryBf

It's a carved wooden dragon that my dad got from Indonesia (probably about 50 years ago).

It's hard to appreciate, if you aren't holding it, but it weighs a lot, and is intricately carved, all over.

I guarantee that the carver used a Dremel.

I still have a huge amount of respect for their work. That wood is like rock. I would not want to carve it with hand tools.

There's just some heights we can't reach, without a ladder.

What good is a “feeling of accomplishment” as I am on the street homeless, hungry and naked?

Pretty black-and-white view. The feeling of accomplishment is the part that makes a job interesting; if it’s just about money, it becomes dull.

And don’t forget, it’s more likely they'll find someone cheaper who can write the same prompts as you than someone with the same kind of experience in cracking problems.

To tackle the second part first: do you think creating finely crafted bespoke code is going to save the job of a mid-level ticket-taker (not referring to you, of course) who can take well-defined requirements and create code - i.e., “a human LLM”?

Those types of developers on the enterprise dev side - where most developers work - were becoming a commodity a decade ago and wages have been basically stagnant. Now those types of developers are finding it hard to stand out and get noticed.

The trick is to move “up the stack” and closer to the customer whether that be an internal customer or external customer and be able to work at a higher level of scope, impact and ambiguity.

https://www.levels.fyi/blog/swe-level-framework.html

It’s been well over a decade and six jobs since I had to do a coding interview to prove I was able “to codez real gud”. Every job I’ve had since then has been more concerned with whether I was “smart and get things done”. That could mean coding, leading teams, working with “the business”, being on Zoom calls with customers, flying out to a customer's site, or telling a PE-backed company with low margins that they didn’t need a team of developers, they needed to outsource complete implementations to other companies.

I’ve always seen coding as grunt work. But it's the only way to go from requirements -> architectural vision -> result, and therefore money in my pocket.

My vision was based on what I could do myself in the allotted time at first and then what I could do with myself + leading a team. Now it’s back to what I can do by myself + Claude Code and Codex.

As far as the first question goes, my “fun” during my adult life came from teaching fitness classes until I was 35 and running charity races with friends on the weekend; then just hanging out and spending time with my (now grown) stepsons; and, for the past few years, just spending time with my wife - traveling, concerts, some “digital nomadding”, etc.

Eh. I've had pride in my work for over 40 years.

The tools change, but the spirit only grows.

[dead]

Yes, but bad ingredients do not make a yummy pudding.

Or, it's like trying to make a MacBook Pro by buying electronics boards from AliExpress and wiring them together.

I'd rather have a laptop made from AliExpress components than only have a single artisanal hand-crafted resistor.

That's a false dichotomy, because transistors and ICs are manufactured to be deterministic and nearly perfect. LLMs can never be guaranteed to be like that.

Yes, some things are better when manufactured in highly automated ways (like computer chips), but their design has been thoroughly tested and before shipping the chips themselves go through lots of checks to make sure they are correct. LLM code is almost never treated that way today.

Yes, the point is that only if you're willing to accept crappy results can you use AI to build bigger things.

To me that seems like a spurious (maybe even false) dichotomy. You can have crappy results without AI. And you can have great results with AI.

Your contrast is an either/or that, in the real world, does not exist.

Take content written by AI, prompted by a human. A lot of it is slop and crap. And there will be more slop and crap with AI than before. But that was also the case when the medium changed from hand-written to printed books. And when paper and printing became cheap, we had slop like those 10-cent Western or romance novellas.

We also still had Goethe, still had Kleist, still had Grass (sorry, very German centric here).

We also have Inception vs. the latest sequel of any Marvel franchise.

I have seen AI-written but human-prompted short stories that made people well up and find ideas presented in a light not seen before. And I have seen AI-generated stories that I want to purge from my brain.

It isn't the tool - it is the one wielding it.

Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.

> Question: Did photoshop kill photography? Because honestly, this AI discussion to me sounds very much like the discussion back then.

It killed an aspect of it: the film processing in the darkroom. Even before digital cameras were ubiquitous, it was standard to get a scan before doing any processing digitally. Chemical processing was reduced to the minimum necessary.

[deleted]

Lightroom killed photography.

I was going to reply defending AI tooling and crappy results, but I think I'm done with it.

I think there is just a class of people who think that you cannot get 'MacBook' quality with an LLM. I don't know why I try to convince them; it's not to my benefit.

[deleted]

It's more like the chess.com vs lichess example in my mind. On the one hand you have a big org, dozens of devs, on the other you have one guy doing a better job.

It's amazing what one competent developer can do, and it's amazing how little a hundred devs end up actually doing when weighed down by bureaucracy. And let's not pretend even half of them qualify as competent, not to mention they probably don't care either. They get to work and have a 45-minute coffee break, move some stuff around on the Kanban board, have another coffee break, then lunch, then foosball, etc. And when they actually write some code, it's ass.

And sure, for those guys maybe LLMs represent a huge productivity boost. For me it's usually faster to do the work myself than to coax the bot into creating something acceptable.

Agreed. Most people don't do anything and this might actually get them to produce code at an acceptable rate. I find that I often know what I need to do and just hitting the LLM until it does what I want is more work than writing the damn code (the latter also being a better way to be convinced that it works, since you actually know what it does and how). People are very bad code reviewers, especially those people who don't do anything, so making them full time code reviewers always seemed very odd to me.

Supposedly when Michelangelo was asked about how he created the statue of David, he said "I just chipped away everything that wasn’t David.”

Your work is influenced by the medium by which you work. I used to be able to tell very quickly if a website was developed in Ruby on Rails, because some approaches to solve a problem are easy and some contain dragons.

If you are coding in clay, the problem is getting turned into a problem solvable in clay.

The challenge if you are directing others (people or agents) to do the work is that you don't know if they are taking into account the properties of the clay. That may be the difference between clean code - and something which barely works and is unmaintainable.

I'd say in both cases of delegation, you are responsible for making sure the work is done correctly. And, in both cases, if you do not have personal experiences in the medium you may not be prepared to judge the work.

This is an amazing quote - thank you. This is also my argument for why I can't use LLMs for writing (proofreading is OK) - what I write is not produced as a side-effect of thinking through a problem, writing is how I think through a problem.

Counterpoint (more devil's advocate): I'd argue it's better that an LLM writes something (e.g. the solution or the thinking-through of a problem) than nothing at all.

Counterpoint to my own counterpoint, will anyone actually (want to) read it?

Counterpoint to the third degree, to loop it back around: an LLM might - and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking architectural documentation, etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a document; I quickly lose interest and scan-read or just focus on what I need instead.

I kinda saw this happen in realtime on reddit yesterday. Someone asked for advice on how to deal with a team that was in over their heads shipping slop. The crux of their question was fair, but they used a different LLM to translate their original thoughts from their native language into English. The prompt was "translate this to english for a reddit post" - nothing else.

The LLM added a bunch of extra formatting for emphasis and structure to what might have originally been a bit of a ramble, but obviously human-written. The comments absolutely lambasted this OP for being a hypocrite: complaining about their team using AI, but then seeing little problem with posting what is obviously an AI-generated question because the OP didn't deem their English skills good enough to ask the question directly.

I'm not going to pass judgement on this scenario, but I did think the entire encounter was a "fun" anecdote in addition to your comments.

Edit: wrods

I saw the same post and was a bit saddened that all the comments seemed to be focused on the implied hypocrisy of the OP instead of addressing the original concern.

As someone that’s a bit of a fence-sitter on the matter of AI, I feel that using it in the way that OP did is one of the less harmful or intrusive uses.

I see it as worse because you could have put just as much effort in - less even - and gotten a better result just sticking it in a machine translator and pasting that.

Writing is how I think through a problem too, but that also applies to writing and communicating with an AI coding agent. I don't need to write the code per se to do the thinking.

You could write pseudocode as well. But for someone who is familiar with a programming language, it’s just faster to use the latter. And if you’re really familiar with the language, you start thinking in it.

I personally have found success with an approach that's the inverse of how agents are being used generally.

I don't allow my agent to write any code. I ask it for guidance on algorithms, and to supply the domain knowledge that I might be missing. When using it for game dev for example, I ask it to explain in general terms how to apply noise algorithms for procedural generation, how to do UV mapping etc, but the actual implementation in my language of choice is all by hand.

Honestly, I think this is a sweet spot. The amount of time I save getting explanations of concepts that would otherwise get a bit of digging to get is huge, but I'm still entirely in control of my codebase.

Yep, this is the sweet spot. Though I still let it type code a lot - boilerplate stuff I’d be bored out of my mind typing. And I’ve found it has an extremely high success rate typing that code, on top of it being very easy for me to review. No friction at all. Granted, this is often no larger than 100 lines or so (across various files).

If it takes you more than a few seconds or so to understand code an agent generated you’re going to make mistakes. You should know exactly what it’s going to produce before it produces it.

Coding is not at all like working a lump of clay unless you’re still writing assembly.

You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.

The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.

It depends on what you're doing, not really on what you do it with.

I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions and logic based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.

"Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to make a bridge of Lego a 175kg human can stand on, you'll learn more about Lego and building it than you will about clay.

Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.

> You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs.

Correct. However, you will probably notice that your solution to the problem doesn't feel right when the bricks that are available to you don't compose well. The AI will just happily smash bricks together, and at first glance it might seem that the task is done.

Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.

Not yet, anyway; I do trust LLMs for writing snippets or features at this point, but I don't trust them for setting up new applications, technology choices, architectures, etc.

The other day people were talking about metrics: the number of lines of code people vs LLMs could output in any given time, or the lines of code in an LLM-assisted application - using LOC as a metric for productivity.

But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?

I've got a fairly simple application that renders a table (and in future some charts) with metrics. At the moment all that is done "by hand"; the last features were stuff like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application can be thrown out in favor of a workbook (one of those data analysis tools; I'm not at home in that area at all). That'd save hundreds of lines of code + maintenance burden.

I was creating a Jira/bb wrapper with node recently and Claude actually used plenty of libraries to solve some tasks.

Same with gpt, but I felt it was more like "hey, everyone uses that, so why not me" than finding the right tool for the job. Can't say for Claude.

Unless you limit your scope of problem solving to only what you can do yourself, you are going to have to delegate work - your abstraction is going to be specs and delegating work to other people and ensuring it works well together and follows the specs - just like working with an LLM.

Exactly, and that's why I find AI coding solves this well: I find it tedious to put the bricks together for the umpteenth time when I can just have an AI do it (I will of course verify the code when it's done; not advocating for vibe coding here).

This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.

> Coding is not at all like working a lump of clay unless you’re still writing assembly.

Isn't the analogy apt? You can't make a working car using a lump of clay, just a car statue; a lump of clay is already an abstraction of the objects you can make in reality.

Bingo.

Lego boxes include a set of instructions that implies there's only one way to assemble the contents, but that's sometimes an injustice to the creative space that Legos are built to provide. There can be a joy in algorithmically building the thing some other designers worked to make look nice, but there's a creative space outside the instructions, too.

The risk of LLMs laying more of these bricks isn't just loss of authenticity and less human elements of discovery and creation, it's further down the path of "there's only one instruction manual in the Lego box, and that's all the robots know and build for you". It's an increased commodification of a few legacy designers' worth of work over a larger creative space than at first seems apparent.

I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.

Yep. Any statement in Python or other languages can be mapped to something that the machine will do. And it will be the same thing every single time (concurrency and race issues aside). There’s no English sentence that can be as clear.

We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
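
A made-up illustration of the point (the order fields are invented for the example): the English spec "keep only orders from the last 30 days that have been paid, group them by customer, and report each customer's total spend, largest first" still leaves edge cases open, while a few lines of code pin every one of them down:

  from collections import defaultdict
  from datetime import datetime, timedelta, timezone

  def spend_per_customer(orders, now=None):
      """Total paid spend per customer over the last 30 days, largest first."""
      now = now or datetime.now(timezone.utc)
      cutoff = now - timedelta(days=30)
      totals = defaultdict(float)
      for order in orders:
          # "last 30 days" and "paid" are now exact, not open to interpretation
          if order["paid"] and order["placed_at"] >= cutoff:
              totals[order["customer"]] += order["amount"]
      return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

Every question the prose dodges (inclusive of today? how are ties ordered?) has exactly one answer here, the same answer every single time.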

There's also creative space inside the formal notation. It's not just "these are the known abstractions, please lego them together", the formal syntax and notation is just one part of the whole. The syntax and notation define the forms of poetry (here's the meter, here's the rhyme scheme, here's how the whitespace works), but as software developers we're still filling in the words that fit that meter and rhyme scheme and whitespace. We're adding the flowery metaphors in the way we choose variable names and the comments we choose to add and the order we define things or choose to use them.

Software developers can use the exact same "lego block" abstractions ("this code just multiplies two numbers") and tell very different stories with it ("this code is the formula for force power", "this code computes a probability of two events occurring", "this code gives us our progress bar state as the combination of two sub-processes", etc).

LLMs have only so many "stories" they are trained on, and so many ways of thinking about the "why" of a piece of code rather than mechanical "what".
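
A toy sketch of what I mean (function names invented for the example): the "what" is identical in all three, multiplying two numbers, but the "why" a human reads off the names and docstrings is completely different:

  def force_newtons(mass_kg, acceleration_ms2):
      """Newton's second law: F = m * a."""
      return mass_kg * acceleration_ms2

  def joint_probability(p_event_a, p_event_b):
      """Chance of two independent events both occurring."""
      return p_event_a * p_event_b

  def combined_progress(download_fraction, unpack_fraction):
      """One way to fold two sub-process fractions into a single progress bar value."""
      return download_fraction * unpack_fraction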

Computers only care about the what, and have no use for the why. Humans care about the latter too and the programmer lives at the intersection of both. Taking a why and transforming it into a what is the coding process.

Software engineering is all about making sure the what actually solves the why, making the why visible enough in the what so that we can modify the latter if the former changes (it always does).

Current LLMs are not about transforming a why into a what. It’s about transforming an underspecified what into some what that we hope fits the why. But as we all know from the 5 Whys method, whys are a recursive structure, and most of software engineering is about diving into the details of the why. The what is easy once that’s done, because computers are simple mechanisms if you choose the correct level of abstraction for the project.

> plugging them together like LEGOs

Aren't Legos known for their ability to enable creativity and endless possibilities? It doesn't feel that different from the clay analogy, except a bit coarser grained.

[deleted]

You’re both right. It just depends on the problems you’re solving and the languages you use.

I find languages like JavaScript promote the idea of “Lego programming” because you’re encouraged to use a module for everything.

But when you start exploring ideas that haven’t been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don’t repeat yourself) methodologies, then you can feel a lot more like a sculptor.

Likewise if you’re building frameworks rather than reusing them.

So it really depends on the problems you’re solving.

For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.

Changing "clay" for "Legos" doesn't change the core argument: it's still about the tactile feel you get for the medium as you work it with your hands, and the "artificial syntax" imposed by the medium.

> Being handed a baked and glazed artefact that approximates what you thought you wanted to make

Isn't this also an understatement, and the problem is worse? That is, the code being handed back is at best a great prototype: it needs polishing/finishing, and is ignorant of obvious implicit edge cases unless you explicitly enumerate all of them in your prompts.

For me, the state of things reminds me of a bad job I had years ago.

Worked with a well-regarded, long-tenured but truculent senior engineer who was immune to feedback due to his seniority. He committed code that either didn't run, didn't pass tests, or implemented only the most obvious happy-path, robotically literal interpretation of requirements.

He was however, very very fast... underbidding teammates on time estimates by 10x.

He would hand back the broken prototype and we'd then spend the 10x time making his code actually something you can run in production.

Management kept pushing this because he had a great reputation, promised great things, and every once in a while did actually deliver stuff fast. It took years for management to come around to the fact that this was not working.

For me it’s a related but different worry. If I’m no longer thinking deeply, then maybe my thinking skills will simply atrophy and die. Then when I really need it, I won’t have it. I’ll be reduced to yanking the lever on the AI slot machine, hoping it comes up with something that’s good enough.

But at that point, will I even have the ability to distinguish a good solution from a bad one? How would I know, if I’ve been relying on AI to evaluate if ideas are good or not? I’d just be pushing mediocre solutions off as my own, without even realising that they’re mediocre.

I relate to this. But also, isn't it just that every human endeavor goes through an evolution from craft to commodity, which is sad for the craftsmen but good for everyone else, and that we happen to be the ones living through that for software?

For instance, I think about the pervasive interstate overpass bridge. There was a time long ago when building bridges was a craft. But now I see like ten of these bridges every day, each of which is better - in the sense of how much load they can support and durability and reliability - than the best that those craftsmen of yore could make.

This doesn't mean I'm in any way immune to nostalgia. But I try to keep perspective, that things can be both sad and ultimately good.

If you're only building things that have been built before, then sure, though I'd argue we already had solutions for that before LLMs.

There is a presumption that the models we are using today are 'good enough'. By models I mean things like linkers and package managers, microservices and cluster management tools.

I personally think that we're not done evolving really, and to call it quits today would leave a lot of efficiency and productivity on the table.

While there is still a market for artisanal furniture, dishes and clothes most people buy mass-produced dishes, clothes and furniture.

I wonder if software creation will be in a similar place. There still might be a small market for handmade software but the majority of it will be mass produced. (That is, by LLM or even software itself will mostly go away and people will get their work done via LLM instead of "apps")

As with furniture, it's supply vs demand, and it's a discussion that goes back decades at this point.

Very few people (even before LLM coding tools) actually did low level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in b2b / b2c software, building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill / experience / etc, but it was more assembling existing parts and connecting them.

Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.

Or from another perspective, replace "LLM" with "outsourcing".

I would argue the opposite..

What you get right now is mass-replicated software, just another copy of SAP/Office/Spotify/whatever.

That software is not made individually for you; you get a copy like millions of other people, and there is nearly no market anymore for individual software.

LLMs might change that; we have a bunch of internal apps now for small annoying things.

They all have their quirks, but they're only accessible internally and make life a little bit easier for people working for us.

Most of them are one-shot LLM things: throw them away if you do not need them anymore, or just one-shot them again.

The question is whether that's a good thing or not; software adages like "Not Invented Here" aren't going to go away. For personal tools / experiments it's probably fine, just like hacking together something in your spare time, but it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.

> it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

So that Excel spreadsheet that manages the entire sales funnel?

Acceptance of mass production is only post establishment of quality control.

Skipping over that step results in a world of knock offs and product failures.

People buy Zara or H&M because they can offload the work of verifying quality to the brand.

This was a major hurdle that mass manufacturing had to overcome to achieve dominance.

>Acceptance of mass production is only post establishment of quality control.

Hence why a lot of software development is gluing libraries together these days.

This makes no sense to me. There are plenty of artists out there (e.g. El Anatsui), not to mention whole professions such as architects, who do not interact directly with what they are building, and yet can have profound relationship with the final product.

Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".

In my company, [enterprise IT] architects are separated into two kinds. People with a CV longer than my arm, who know/anticipate everything that could fail and have reached a level of understanding that I personally call "wisdom". And theorists, who read books and norms, focus mostly on the nominal case, and have no idea of [and no interest in] how the real world will be a hard brick wall that challenges each and every idea you invent.

Not being hands-on, and more important not LISTENING to the hands-on people and learning from them, is a massive issue in my surroundings.

So thinking hard on something is cool. But making it real is a whole different story.

Note: as Steve used to say, "real artists ship".

You think El Anatsui would concur that he didn't interact directly with what he was building? "Hands on" and "the material you're shaping" are metaphors.

I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM how to build a certain feature, when it comes to the level of "being hands on".

Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.

> I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM

Because everyone under him knows that a big enough mistake is a quick way to unemployment or legal action. So the whole team is pretty much aligned. A developer using an LLM may as well try to herd cats.

First, that's quite a sad view of incentive structures. Second, you can't be serious in thinking that "worker worried they might be fired" puts the person in charge closer to the "materials" and more "hands on" with the project.

Having a background in fine art (and also knew Aral many years ago!), this prose resonates heavily with me.

Most of the OP article also resonated with me as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) to building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all, in fact it feels like AI is supporting both in ways that I find exhilarating.

I'm not sure OP is missing anything because of AI per se, it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE?

For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.

I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.

Claude does tend to make my coarse implementations tighter and more robust.

I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.

>> Coding is like

That description is NOT coding, coding is a subset of that.

Coding comes once you know what you need to build, coding is the process of you expressing that in a programming language and as you do so you apply all your knowledge, experience and crucially your taste, to arrive at an implementation which does what's required (functionally and non-functionally) AND is open to the possibility of change in future.

Someone else here wrote a great comment about this the other day, along the lines of: take that week of work described in the GP's comment, and on the Friday afternoon delete all the code checked in. Coding is the part needed to recreate the check-in, which would take a lot less than a week!

All the other time was spent turning you into the developer who could understand why to write that code in the first place.

These tools do not allow you to skip the process of creation. They allow you to skip aspects of coding. If you choose to, they can also elide your tastes, but that's not a requirement of using them; they respond well to examples of code and other directions that guide them towards your tastes. The functional and non-functional parts they're pretty good at without much steering now, but I always steer for my tastes because, e.g., Opus 4.5 defaults to a more verbose style than I care for.

It's all individual. That's like saying writing only happens when you know exactly the story you want to tell. I love opening a blank project with a vague idea of what I want to do, and then just starting to explore while I'm coding.

I'm sure some coding works this way, but I'd be surprised if it's more than a small percentage of it.

I get what he's pointing at: building teaches you things the spec can't, and iteration often reveals the real problem.

That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.

For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).

> That said, the framing feels a bit too poetic for engineering.

I wholeheartedly disagree but I tend to believe that's going to be highly dependent on what type of developer a person is. One who leans towards the craftsmanship side or one who leans towards the deliverables side. It will also be impacted by the type of development they are exposed to. Are they in an environment where they can even have a "lump of clay" moment or is all their time spent on systems that are too old/archaic/complex/whatever to ever really absorb the essence of the problem the code is addressing?

The OP's quote is exactly how I feel about software. I often don't know exactly what I'm going to build. I start with a general idea and it morphs towards excellence through iteration. My idea changes, and is sharpened, as it repeatedly runs into reality. And by that I mean, it's sharpened as I write and refactor the code.

I personally don't have the same ability to do that with code review because the amount of time I spend reviewing/absorbing the solution isn't sufficient to really get to know the problem space or the code.

"The muse visits during the act of creation, not before. Start alone."

That has actually been a major problem for me in the past where my core idea is too simple, and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, they always have.

The best analogy, I think, is this: if you just take Stack Overflow code solutions, smoosh them over your code, hit compile/build, and move on without ever looking at "why it works", you're really not using your skills to the best of your ability, and it could introduce bugs you didn't expect, or completely unnecessary dependencies. With Stack Overflow you can have other people pointing out the issues with the accepted answer and giving you better options.

This keeps coming up again and again and again, but how many times were you actually able to copy-paste an SO solution wholesale and just have it work? Other than for THE most simple cases (usually CSS) there would always have to be some understanding involved. Of course you don't learn deeply every time, but the whole "copy paste off of Stack Overflow" thing was always an exaggeration that is now being used in seeming earnest.

It's very similar now, you have to surf a swell of selective ignorance that is (feels?) less reliable than the ignorance that one adopts when using a dependency one hasn't read and understood the source code for.

One must be conversant in abstractions that are themselves ephemeral and half hallucinated. It's a question of what to cling to, what to elevate beyond possibly hallucinated rubbish. At some level it's a much faster version of the meatspace process, and it can be extremely emotionally uncomfortable and anarchic to many.

Sometimes you want an artistic vase that captures some essential element of beauty, culture, or emotion.

Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.

Both are desirable, for different people, for different purposes.

With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.

> Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

If you pardon the analogy, watch how Japanese artisans make a utilitarian teapot which reliably pours a cup of tea.

It's more complicated and skill-intensive than it looks.

In both realms, making an artistic vase can be simpler than a simple utilitarian tool.

AI is good at making (arguably poor quality) artistic vases via its stochastic output, not highly refined, reliable tools. Tolerances on the latter are tighter.

There is a whole range of variants in between those two "artistic vs utilitarian" points. Additionally, there is a ton of variance around "artistic" vs "utilitarian".

Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit artisan. $5.00 teapots from an East Asian mass production factory will be very different than high quality mass-produced upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).

Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.

You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.

If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.

This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).
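
For what it's worth, the hand-written exploration I mean is nothing fancy (the file and column names below are made up), but it's exactly this kind of back-and-forth that builds the intimacy:

  import pandas as pd

  df = pd.read_csv("events.csv")                    # hypothetical dataset

  df.shape                                          # how much data is there?
  df.dtypes                                         # what did pandas infer?
  df.isna().sum()                                   # where are the holes?
  df["country"].value_counts().head(10)             # what dominates this column?
  df.groupby("country")["revenue"].describe()       # how is revenue spread per group?
  df[df["revenue"] > df["revenue"].quantile(0.99)]  # eyeball the outliers

Each line answers one small question, and the answer usually suggests the next question.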

Thanks for the quote, it definitely resonates. Distressing to see many people who can't relate to this, taking it literally and arguing that there is nothing lost the more removed they are from the process.

Honestly this sounds like a Luddite mindset (and I mean that descriptively, not to be insulting). This mindset holds us back.

You can imagine the artisans who made shirts saying the exact same thing as the first textile factories became operational.

Humans have been coders in the sense we mean for a matter of decades at most - a blip in our existence. We’re capable of far more, and this is yet another task we should cast into the machine of automation and let physical laws do the work for us.

We’re capable of manipulating the universe into doing our bidding, including making rocks we’ve converted into silicon think on our behalf. Making shirts and making code: we’re capable of so much more.

Yes, this is maybe why I prefer to jump directly to coding, instead of using Canva to draw the GUI and stuff. I would not know what to draw because the involvement is not so deep... or something.

That quote sounds like special pleading for artisans.

Reminds me of arguments for hosting a server vs running stuff in the cloud, or VPS vs containers.

This is cute, but this is true for ALL activities in life. I have to constantly remind my brother that his job is not unique, and that if he took a few moments, he might realize flipping burgers is also molding lumps of clay.

I think the biggest beef I have with Engineers is that for decades they more or less reduced the value of other lumps of clay and now want to throw up their arms when it's theirs.

I love Aral, he is so invested.

Yeah? And then you continue prompting and developing, and go through a very similar iterative process, except now it's faster and you get to tackle more abstract, higher level problems.

"Most developers don't know the assembly code of what they're creating. When you skip assembly you trade the very thing you could have learned to fully understand the application you were trying to make. The end result is a sad simulacrum of the memory efficiency you could have had."

This level of purity-testing is shallow and boring.

I don't think this comparison holds up. With a higher-level language, the material you're building with is a formal description of the software, which can be fed back into a compiler to get a deterministic outcome.

With an LLM, you put in a high-level description, and then check in the "machine code" (generated code).

This is beautifully written, but as a point against agentic AI coding, I just don't really get it.

It seems to assume that vibe coding or like whatever you call the Gas Town model of programming is the only option, but you don't have to do that. You don't have to specify upfront what you want and then never change or develop that as you go through the process of building it, and you don't have to accept whatever the AI gives you on the other end as final.

You can explore the affordances of the technologies you're using, and modify your design and vision for what you're building as you go; if anything, I've found AI coding makes it far easier to change and evolve my direction, because it can update all the various parts of the code that need to be updated when I want to change direction, as well as keeping the tests and specification and documentation in sync, easily and quickly.

You also don't need to take the final product as a given, a "simulacrum delivered from a vending machine": build, and then once you've gotten something working, look at it and decide that it's not really what you want, and then continue to iterate and change and develop it. Again, with AI coding, I've found this easier than ever because it's easier to iterate on things. The process is a bit faster for not having to move the text around and look up API documentation myself, even though I'm directly dictating the architecture and organization and algorithms and even where code should go most of the time.

And with the method I'm describing, where you're in the code just as much as the AI is, just using it to do the text/API/code munging, you can even let the affordances of not just the technologies, but the source code and programming language itself affect how you do this: if you care about the quality, clarity, and organization of the code that the AI is generating, you'll see when it's trying to brute force its way past technical limitations and instead redirect it to follow the grain. It just becomes easier and more fluid to do that.

If anything, AI coding in general makes it easier to have a conversation with the machine and its affordances and your design vision and so on, than before, because it becomes easier to update everything and move everything around as your ideas change.

And nothing about it means that you need to be ignorant of what's going on; ostensibly you're reviewing literally every line of code it creates and deciding which libraries and languages it uses, as well as the architecture, organization and algorithms. You are, aren't you? So you should know everything you need to know. In fact, I've learned several libraries and a language just from watching it work, enough that I can work with them without looking anything up, even new syntax and constructs that would have been very unfamiliar back in my manual coding days.

I have no idea who this guy is (I guess he's a fantasy novelist?) but this video came up in my YouTube feed recently and feels like it matches closely with the themes you're expressing. https://youtu.be/mb3uK-_QkOo?si=FK9YnawwxHLdfATv

I dunno, when you've made about 10,000 clay pots it's kinda nice to skip to the end result; you're probably not going to learn a ton with clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.

I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.

Depends on the problem. If the complexity of what you are solving is in the business logic, or generally low, you are absolutely right. Manually coding signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it’s different. Doing complex cryptography, doing performance optimization or near-hardware stuff is just a different class of problems.

> If the complexity of what you are solving is in the business logic, or generally low, you are absolutely right.

The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.

So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.

Coding signup flow #875 should be as easy as using a snippet tool or a code generator. Everyone explaining why using an LLM is a good idea always sounds like they're living in the stone age of programming. There are already industrial-level tools to get things done faster. Often so fast that I feel time is being wasted describing it in English.

Of course I use code generation. Why would that be mutually exclusive from AI usage? Claude will take full advantage of it with proper instruction.

In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

Can't speak to firmware code or complex cryptography but my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

> my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

Presumably humanity still has room to grow and not everything is already in the training set.

> In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

This rather tells that the kind of performance optimizations that you ask for are very "standard".

Most optimizations are making sure you do not do work that is unnecessary or that you use the hardware effectively. The standard techniques are all you need 99% of the time you are doing performance work. The hard part about performance is dedicating the time towards it and not letting it regress as you scale the team. With AI you can have agents constantly profiling the codebase, identifying and optimizing hotspots as they get introduced.
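
As a minimal sketch of that kind of routine pass (the hotspot function here is deliberately contrived), Python's built-in profiler is usually all it takes to find where the unnecessary work is hiding:

  import cProfile
  import pstats

  def hotspot(n):
      # deliberately wasteful: recomputes overlapping sums in a loop
      return sum(sum(range(i)) for i in range(n))

  profiler = cProfile.Profile()
  profiler.enable()
  hotspot(2000)
  profiler.disable()

  # Show the ten functions with the highest cumulative time.
  pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

Whether a human or an agent runs it, the output points at the same few lines; the discipline is in running it regularly.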

> Most optimizations are making sure you [...] use the hardware effectively.

If you really care about using the hardware effectively, optimizing the code is so much more than what you describe.

As most are

import claypot

trillion dollar industry boys

[dead]

> you're probably not going to learn a ton with clay pot #10,001

Why not just use a library at that point? We already have support for abstractions in programming.

Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

Ironic. The frequency and predictability of this type of response — “This criticism of new technology is invalid because someone was wrong once in the past about unrelated technology” — means there might as well be an LLM posting these replies to every applicable article. It’s boring and no one learns anything.

It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?

Note that we still have not solved cameras or even cars.

The biggest lesson I am learning recently is that technologists will bend over backwards to gaslight the public to excuse their own myopia.

Interesting comparison. I remember watching a video on that. Landscape painting, portraiture, etc. is an art that has taken an enormous nosedive. We, as humans, have missed out on a lot of art because of the invention of the camera. On the other hand, the benefits of the camera need no elaboration. Currently AI has a lot of foot guns though, which I don't believe the camera had. I hope AI gets to that point too.

>We, as humans, have missed out on a lot of art because of the invention of the camera.

I so severely doubt this to the point I'd say this statement is false.

The further back you go in the past, the more expensive and rare art was. Better quality landscapes/portraits were exceptionally rare and really only commissioned by those with money, which again was a smaller portion of the population in the time before cameras. It's likely there are more high-quality paintings now per capita than there ever were in the past, and the issue is not production, but exposure to the high-quality ones. Maybe this is what you mean by 'miss out'?

In addition, the general increase in wealth, coupled with the cost of art supplies dropping, opens up massive room for lower quality art to fill the gap. In the past canvas was typically more expensive, so sucky pictures would get painted over.

The footgun cameras had was exposure time.

1826 - The Heliograph - 8+ hours

1839 - The Daguerreotype - 15–30 Mins

1841 - The Calotype - 1–2 Mins

1851 - Wet Plate Collodion - 2–20 Secs

1871 - The Dry Plate - < 1 Second.

So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.

> Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

This is a non sequitur. Cameras have not replaced paintings, assuming this is the inference. Instead, they serve only to be an additional medium for the same concerns quoted:

  The process, which is an iterative one, is what leads you 
  towards understanding what you actually want to make, 
  whether you were aware of it or not at the beginning.
Just as this is applicable to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too is it when people say, "that picture didn't come out the way I like, let's take another one."

Photography’s rapid commercialisation [21] meant that many painters – or prospective painters – were tempted to take up photography instead of, or in addition to, their painting careers. Most of these new photographers produced portraits. As these were far cheaper and easier to produce than painted portraits, portraits ceased to be the privilege of the well-off and, in a sense, became democratised [22].

Some commentators dismissed this trend towards photography as simply a beneficial weeding out of second-raters. For example, the writer Louis Figuier commented that photography did art a service by putting mediocre artists out of business, for their only goal was exact imitation. Similarly, Baudelaire described photography as the “refuge of failed painters with too little talent”. In his view, art was derived from imagination, judgment and feeling but photography was mere reproduction which cheapened the products of the beautiful [23].

https://www.artinsociety.com/pt-1-initial-impacts.html#:~:te...

> Cameras have not replaced paintings, assuming this is the inference.

You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

Guess what, they got over it. You will too.

What stole the joy you must have felt, fleetingly, as a child that beheld the world with fresh eyes, full of wonder?

Did you imagine yourself then, as you are now, hunched over a glowing rectangle? Demanding imperiously that the world share your contempt for the sublime. Share your jaundiced view of those that pour the whole of themselves into the act of creation, so that everyone might once again be graced with wonder anew.

I hope you can find a work of art that breaks you free of your resentment.

Thank you for brightening my morning with a brief moment of romantic idealism in a black ocean of cynicism

So I'm the cynic here. That's a hoot.

[flagged]

Thank you for the AI warning, so I didn't have to read that.

Ah well, I'm neurodivergent and it’s challenging for me to write a comment while remembering that others don’t have access to my thoughts and might interpret things differently. And it's too late to edit it now

What I wanted to show is that, clearly unlike a camera or other devices, AI can copy originality. OP's comment was pretty original in its wording, and GPT came pretty close imo. It really wasn't meant as a low effort comment.

Plot twist. The comment you love is the cynical one, responding to someone who clearly embraces the new by rising above caution and concern. Your GPT addition has missed the context, but at least you've provided a nice little paradox.

What I especially enjoy is seeing those people accuse AI of being a "parrot" or a "mindless next-token predictor." Inevitably, these accusations are levied in comments whose every thought and token could have been lifted verbatim from any of a thousand such comments over the past few years, accompanied by the rusty squeak of goalpost wheels.

>> Cameras have not replaced paintings, assuming this is the inference.

> You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

> Guess what, they got over it.

You conveniently omitted my next sentence, which contradicts your position and reads thusly:

  Instead, they serve only to be an additional medium for the 
  same concerns quoted ...
> You will too.

This statement is assumptive and gratuitous.

Username checks out, at least.

> Username checks out, at least.

Thoughtful retorts such as this are deserving of the same esteem one affords the "rubber v glue"[0] idiom.

As such, I must oblige.

0 - https://idioms.thefreedictionary.com/I%27m+rubber%2c+you%27r...

Logic needs to be shown the door on occasion. Sometimes via the help of an ole Irish bar toss.

There are other sites. Other doors, on other bars.

> Guess what, they got over it. You will too.

Prediction is difficult, especially of the future.

It ain't over 'til it's over. And when you come to a fork in the road, take it.

Source?

Art history. It's how we ended up with Impressionism, for instance.

People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.

Yeah, and cameras changed art forever.

If you don't like change, then my recommendation is to steer clear of careers in either art or technology.

people still make clay pots and paint landscapes

Creativity is not what one would expect out of the Renaissance.