AI is just revealing the two types of people in this line of work: those who don't actually like software and just do it because it's lucrative, and the actual nerds who care.

You are probably talking about people who just crunch out some half-baked solutions for the sake of getting somewhere.

But there are other nerds who care too, just not about code quality: they care about conversion, testing out business ideas quickly, and getting to know their customers better.

There are nerds who care about business strategy.

There are nerds who care about accounting principles and clean financial reporting.

There are nerds who care about sales targets and partnerships.

There are many types of nerds out there. Don't limit nerds to engineers, because the "tech" world is not just an engineering world anymore. All of these nerds are people you can team up with to build meaningful things, because they do care.

They very clearly weren't talking about nerds in general but rather nerds who care about software.

This resonates with me. I'm a Mechanical Engineer who loves the process of coding. I did take an intro to business class as an undergraduate, though, and my professor said one thing that has stuck with me for 30+ years: 'The fundamental goal of a business is to make profit now and in the future'. Vibe-coded slop might get some traction and make money now, but high-quality code will reduce technical debt and allow money to be made in the future. So, in some ways, both camps are right. The PMs/managers/VPs want to make money now, but if they completely disregard the nerdy engineer, they will sabotage their future.

I see a disconnect between these two camps that will probably cause a lot of chaos in the near future. Those that figure it out will thrive.

But also time to market matters.

While Company A is building their product in perfect hand-coded Rust with zero defects, Company B is on their third iteration of vibe'd "slop" and getting actual customer feedback - which helps them iterate further.

It's mostly a matter of whether Company B is smart enough to refactor the code into a more stable and maintainable form, or whether they run headlong into a vibe-slop wall.

A much more charitable framing: people who enjoy the process vs people who enjoy the result.

(Though, granted, the results are a lot better if you craft them by hand.)

But business people have always cared only about the result. My PM (who speaks like a salesman) only cares about the results. My "head of": same. My CEO: same. The only ones who ever cared about the process and quality were us, the engineers… if we don't have that care, well, to hell with everything.

Assuming that is accurate, the logical conclusion is that the race is over. Management can get their $result, and fast. Now, whether it is good or bad is a separate story, and only time will tell whether they will be forced to learn anything. Right now, the expectation is to push for results, and management seems to ascribe the current set of failures to people not embracing AI enough.

That's not true as a blanket statement: many business people really do care about quality and process, and you may find you care much more about the result than you think.

How often have engineers decried yet another rewrite that some project is doing? Or talked about "over-engineering" something that isn't needed, or complained that another person on the team has set up a full Kubernetes GitOps thing that's glorious to them, when you just want to scp a Go binary and be done with it?

I've seen truly excellent engineers hit this issue. I worked in a team years ago, and people disagreed on the approach to take on a new project, so we all made a prototype and presented it so we could pick a direction. There was a requirement that it be done in Ruby, since that was the language most of the developers were most fluent in. One of the engineers, remarkably smart, wrote a Lisp interpreter in Ruby so that technically it'd be "in Ruby" but have the benefits of Lisp.

He cared about the quality and process in one area. Deeply. However, focussing on that would come at the detriment of the rest of the actual product we wanted to ship. If you considered the quality of the product as a whole, and the process at the level of the organisation, you'd do something very different.

Now, none of this means all business people are good at this or long term vision or anything, just as it doesn't mean all engineers have a very narrow focus. But I've seen engineers focus on the quality or engineering of some component without looking at what it is you're actually trying to achieve as a business, and so push for a worse overall process and lower "quality" result. It's the same sort of disconnect that leads a lot of engineers to rail against meetings and PMs that slow them down without seeing from the other side that it's often better to build the right thing more slowly than the wrong thing more quickly.

I would agree that the results are a lot better when crafted by hand, if one removes any notion of a time constraint. Sometimes the comparison is between LLM-authored software and nothing at all.

> enjoy the process

This means different things to different people. A lot of people enjoy the process of engineering solutions with LLM agents, building out tailored skills and custom approaches that make up their own flavour of "agentic" workflow. There are also people who find a joy in JavaScript that other people cannot understand. And others again love systems languages, or even tinkering with assembly, etc.

What I wanted to say is that LLM use does not automatically mean people just want to get results faster; there are still nerds enjoying the process of working with these new tools.

I am not really sure. I wrote some scripts with an LLM that aggregated data from several APIs, and the LLM had the foresight to create a caching layer for the API responses, as it properly inferred that I would need the results over and over again; it also used asyncio to speed up fetching. For me that would have been a v2 or v3, and it one-shotted it perfectly.
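
For context, the output was roughly this shape (a minimal sketch, assuming aiohttp and a simple on-disk JSON cache; the endpoint URLs are placeholders, not the actual script):

```python
import asyncio
import hashlib
import json
from pathlib import Path

import aiohttp

CACHE_DIR = Path(".api_cache")
CACHE_DIR.mkdir(exist_ok=True)

async def fetch_json(session: aiohttp.ClientSession, url: str):
    """Fetch a URL, reusing a cached response from disk if one exists."""
    cache_file = CACHE_DIR / (hashlib.sha256(url.encode()).hexdigest() + ".json")
    if cache_file.exists():
        return json.loads(cache_file.read_text())
    async with session.get(url) as resp:
        resp.raise_for_status()
        data = await resp.json()
    cache_file.write_text(json.dumps(data))
    return data

async def aggregate(urls):
    """Fetch all endpoints concurrently and return their payloads."""
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_json(session, u) for u in urls))

if __name__ == "__main__":
    # Placeholder endpoints; the original scripts hit several real APIs.
    results = asyncio.run(aggregate([
        "https://api.example.com/a",
        "https://api.example.com/b",
    ]))
    print(f"fetched {len(results)} responses")
```

The on-disk cache means repeated runs don't re-hit the APIs, and asyncio.gather issues the requests concurrently rather than one at a time.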

Yeah, they are good at applying generic patterns, but often that can be overkill/YAGNI, leading to more maintenance work in places that would be fine with a much simpler, more straightforward solution. But this is something the engineer can decide, and with LLMs they won't be forced to make the trade-off based on whether it takes longer to build, but rather on whether it is really necessary or not.

Can we build a list of the actual nerds who care? Need it for my future recruitment needs lol.

The benchmark is "do they do it for fun", i.e. personal projects.

But the real trick isn't the number of personal projects, but how weird they are. There's no "rational" reason to do them; they don't increase the person's marketability or hireability. They are done purely for intrinsic reasons.

(On reflection, this also seems to be a pretty robust predictor of autism. :)

I think there's a continuum here, too. I've heard it said, in jest, mind, that LLMs square the dev: they turn a 1.5x dev into a 2.25x dev, but also turn a 0.75x dev into a ~0.56x dev.

I think the exponent of 2 is probably too high, but it's not a bad approximation of a very messy reality.

There is also the division between people who value the thing being produced and people who value the actual production of that thing, whether or not it's used. I don't see one side here being "right", necessarily, but when a company is behind it, one is certainly more valued, and I think not incorrectly.

This is such a naive take. Most of the nerdiest and most "quality"-oriented engineers are leaning hard into agentic coding. I feel like the most impressive engineers I know have always leaned into learning how to "sharpen the axe", and AI is really the biggest axe we have seen.

Your category of "nerds who care" is actually "nerds who only want to be coders" and not "nerds who care about solving problems".

I take software engineering and production reliability very seriously. But coding is just a small part of my job. It's not really the meat and potatoes. I'll vibe code (responsibly) where I can.

I care a lot about software and I use LLMs extensively. There are some things I deeply understand yet I don't care for doing anymore because I've done them for years and there's nothing to be gained from doing them manually.

It goes for all professions, really: people who do it for work and people who care. It applies to any profession: plumbers, doctors, carpenters, cleaners, etc. Most of us have experienced both types, and I haven't heard of anyone preferring the "do it for work" ones over the ones who care. And as in those other professions, in software we accept the worse of the two because finding people who care is both time-consuming and often much more expensive.

> in software we accept the worse of the two

and the whole world suffers for it.

No disagreement from me

I've posited for a while now that the people who find spicy autocomplete to be exciting are the people who can't really do what it does.

I played with Image Playground some time last year. It was really fun. You know why? I can't draw, and I can't paint, to save my life. It let me do something I can't do well, or at all, on my own.

Using an LLM to do something I can do, with the caveat that it's pretty mediocre at the task, and needs to be constantly monitored to check it isn't doing stupid things? If I wanted that I'd just get an intern and watch them copy crappy examples from StackOverflow all day.

The same logic explains the use of LLMs to write emails and other long-form text.

It makes accessible something that people otherwise cannot do well. Go look at submissions on community writing sites. The people who write because they're good at it are adamant they don't use an LLM.

People use LLMs to do things they're otherwise not able to do. I will die on this hill.

Is your argument that there is no imaginable situation where someone who was competent at software development could find use for a semi-automated tool for writing software?

That would imply that either the person in question has infinite time, or has access to all software that could ever be of utility to them, which seems unlikely.

There's a reason I call it spicy autocomplete.

Which is what?

.... that an IDE providing a suggestion about what comes next as you type is not new, and the entire basis of how an LLM works is "what word probably comes next".

I'd have thought someone who's so enamoured with the tech would have at least a basic understanding of how it works.
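
For what it's worth, the "what word probably comes next" loop is simple in shape. Here is a toy greedy-completion sketch over a hand-written bigram table; real models learn the distribution from data and operate on subword tokens, but the generation loop has the same structure:

```python
# Toy illustration of "what word probably comes next": greedy decoding
# over a tiny hand-written bigram table. Purely illustrative.
bigram = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def complete(prompt: str, max_new: int = 3) -> str:
    words = prompt.split()
    for _ in range(max_new):
        dist = bigram.get(words[-1])
        if not dist:
            break
        # pick the most probable next word ("autocomplete")
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(complete("the"))  # -> "the cat sat down"
```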

Indeed. To be honest, I think everyone on HN is aware of how LLMs work at this point; it's not actually adding a great deal to the discussion to keep going on about autocomplete or 'stochastic parrots'.

"I've posited for a while now" and you post the most lukewarm and outdated take like it's an enlightenment. I've been coding for 20 years and can very well do everything the AI does, and so can all devs I know. We use it because it amplifies us, not because we couldn't otherwise. You've chosen a very ridiculous hill to die on.

Initially I wanted to write more, but I can boil it down to taste and context mismatch. By that I mean some people see LLM output as tasteless or kitsch (a view I generally subscribe to), and another set of people (though more often than not overlapping with the first) hold disdain for, or at the very least look askance at, heavy LLM users, the way gym-goers would look at someone in the middle of the gym loudly suggesting using a dolly or forklift instead of barbell training.

So yeah, I guess the value of doodles has shot up simply because of optics.

Somewhere else in this comment section, someone tried to broaden the definition of nerd so much that pretty much anybody who is a consummate professional is also a nerd. The hill I will die on is that people don't actually dislike all this new AI stuff so much as the attitude of the people heavily invested in it.

And to add another data point regarding your hill: my drawing/painting moment was NLP stuff. Now if I want to do (rudimentary) sentiment analysis or keyword extraction, I can lean on a local LLM. Yet I don't go around yelling that Snowball (I think?) is obsolete.

> more so the attitude of people heavily invested in it

Exactly.

LLM bros are just the new blockchain/crypto bros, but they aren't necessarily even writing their own spruiking comments any more.

While you are dying on a hill, with the help of LLMs I'm shipping quality software and features to my customers at a pace I haven't been able to before. And no, not some Next.js slop. If you are letting your LLM look at StackOverflow, you are doing it wrong: it needs to be grounded in your stack's official docs and any other style/rules you prefer, wired up with other tooling like linting/formatting, duplication checking, etc. And yes, you have to constantly monitor the output and review every line of code, but it's still faster and, if managed correctly, produces better code and (this is the hill I will die on) better test suites and documentation than I would have written.

> If you are letting your LLM look at StackOverflow, you are doing it wrong

So you've evaluated all the sources that the model was trained on initially have you? How long did that take you?

> I'm shipping quality software and features to my customers at a pace I haven't been able to before.

I'm sorry are you agreeing with me or not? It sounds like you're agreeing with me.

I'm just saying that you can't just let it rip based on its training alone; it needs to be grounded and harnessed in stack-specific tooling.

I'd be more general and say it needs verification to guide it, and a narrowed scope so it doesn't wander off. How those get provided can vary. While I can do what I'm asking it to do, and have done so many times that I don't want to anymore, I can't do it as fast as it can. But as someone said, it is stupid really fast. The bottleneck is now me slowing down this fast-thinking intern, stopping it to redirect it when it does bad things. The more pre-prompting, context, and verification tools I give it, the less I have to do that, so the faster it goes. Then I get to solve the parts of the problem I haven't done yet, until it's boring.

I care about solving problems for and delivering value to my users. The software is simply a means to that end. It needs to work well, but that does not mean every line of code requires an artisanal touch and high attention to detail.

I think there's some ambiguity in the discussion around what people mean when they say "good code".

Good code for a business is robust code that's functionally correct, efficient where it needs to be, and does not cost too much.

I believe most developers who care about good code are trying to articulate this: they care about a strong system that delivers well, which comes from good architecture.

LLMs actually deliver pretty well on the more trivial code cleanliness stuff, or can be made to pretty trivially with linters, so I don't think devs working with them should be worried about that aspect.

What is changing fast is that last point I mentioned, "does not cost too much", because if you can get 70% of the requirements for 10% of the perceived up-front cost, the calculus has changed. But you are not going to get the same level of system architecture for that time/cost ratio. That can bite you later, as it does often enough with human coders too.

But the trick is that if / when you can define "good code" in a deterministic manner, then the LLM can also deliver "good code".

But if it's just based on feels, then of course it can't do it because it's not a mind reading machine.

I think the other aspect to this, which you allude to at the end, is that all of these arguments start with the assumption that all human software engineers produce high-quality code that meets the requirements, but obviously that's very much not the case in the real world. After all, 80-90% of drivers rate themselves as above average.

If one compares a single competent software engineer directing a number of agents against a random group of engineers (not necessarily working at FAANG or a YC startup), then those quality arguments are going to be significantly less compelling.

Why exactly does "actual nerds who care" stipulate writing code?