LLM anything makes me queasy. Why would any self respecting software developer use this tripe? Learn how to write good software. Become an expert in the trade. AI anything will only dig a hole for software to die in. Cheapens the product, butchers the process and absolutely decimates any hope for skill development for future junior developers.
I'll just keep chugging along, with debian, python and vim, as I always have. No LLM, no LSP, heck not even autocompletion. But damn proud of every hand crafted, easy to maintain and fully understood line of code I'll write.
I use it all the time, and it has accelerated my output massively.
Now, I don't trust the output - I review everything, and it often goes wrong. You have to know how to use it. But I would never go back. Often it comes up with more elegant solutions than I would have. And when you're working with a new platform, or some unfamiliar library that it already knows, it's an absolute godsend.
I'm also damn proud of my own hand-crafted code, but to avoid LLMs out of principle? That's just being a Luddite.
20+ years of experience across game dev, mobile and web apps, in case you feel it relevant.
I have a hard time being sold on "yeah, it's wrong a lot, and also you have to spend more time than you already do on code review."
Getting to sit down and write the code is the most enjoyable part of the job, why would I deprive myself of that? By the time the problem has been defined well enough to explain it to an LLM sitting down and writing the code is typically very simple.
You're giving the game away when you talk about the joy LLMs are robbing from you. I think we all intuit why people don't like the idea of big parts of their jobs being automated away! But that's not an argument on the merits. Our entire field is premised on automating people's jobs away, so it's always a little rich to hear programmers kvetching about it being done to them.
I naively bought into the idea of a future where the computers do the stuff we’re bad at and we get to focus on the cool human stuff we enjoy. If these LLMs were truly incredible at doing my job I’d pack it up and find something else to do, but for now I’m wholly unimpressed, despite what management seems to see in it.
Well, I've spent my entire career writing software, starting in C in the 1990s, and what I'm seeing on my dev laptop is basically science fiction as far as I'm concerned.
Hey, both things can be true. It's a long way from the AI renaissances of the past. There are areas where LLMs make a lot of sense. I just don't find them to be great pair programming partners yet.
I think people are kind of kidding themselves here. For Go and Python, two extraordinarily common languages in production software, it would be weird for me at this point not to start with LLM output. Actually building an entire application, soup-to-nuts, vibe-code style? No, I wouldn't do that. But having the LLM writing as much as 80% of the code, under close supervision, with a careful series of prompts (like, "ok now add otel spans to all the functions that take unpredictable amounts of time")? Sure.
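To make that concrete, a prompt like the otel one tends to produce a mechanical span-wrapping edit. Here's a rough sketch of the shape of that edit, in Python with the OpenTelemetry API (the function and attribute names are invented for illustration, not from any real codebase):

    from opentelemetry import trace

    tracer = trace.get_tracer(__name__)

    def sync_catalog(fetch_all, source_url):
        """Hypothetical function whose runtime depends on a remote service."""
        # The generated edit just wraps the unpredictable work in a span and
        # attaches whatever identifiers are already in scope as attributes.
        with tracer.start_as_current_span("sync_catalog") as span:
            span.set_attribute("source.url", source_url)
            return fetch_all(source_url)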
Don't get me started on testcase generation.
I'm glad that works for you. Ultimately I think different people will prefer different ways of working. Often when I'm starting a new project I have lots of boilerplate from previous ones I can bootstrap off of. If it's a new tool I'm unfamiliar with I prefer to stumble through it, otherwise I never fully get my head around it. This tends to not look like insane levels of productivity, but I've always found in the long run time spent scratching my head or writing awkward code over and over again (Rust did this to me a lot in the early days) ends up paying off huge dividends in the long run, especially when it's code I'm on the hook for.
What I've found frustrating about the narrative around these tools; I've watched them from afar with intrigue but ultimately found that method of working just isn't for me. Over the years I've trialed more tools than I can remember and adopted the ones I found useful, while casting aside ones that aren't a great fit. Sometimes I find myself wandering back to them once they're fully baked. Maybe that will be the case here, but is it not valid to say "eh...this isn't it for me"? Am I kidding myself?
In the last microservice I took over, the tests had been written by junior and mid-level devs using Cursor, and they were a big blob of generated crap that passes the tests, hits the coverage %, and is absolutely useless garbage.
If you're not a good developer, LLMs aren't going to make you one, at least not yet.
If you merge a ball of generated crap into `main`, I don't have to wonder much whether you would have done a better job by hand.
I love the way you described it :)
We get to do cool stuff still, by instructing the LLM how to build such cool stuff.
The parts worth thinking about you still think about. The parts that you’ve done a million times before you delegate so you can spend better and greater effort on the parts worth thinking about.
> The parts that you’ve done a million times before you delegate
That's where I'm confused. I've been coding for more than 20 years, and every task I ever did was different from the other ones. What kind of task do you do a million times before realizing that you should script it in bash or Python?
This is where the disconnect is for me; mundane code can sometimes be nefarious, and I find the mental space I'm in when writing it is very different than reviewing, especially if my mind is elsewhere. The best analogy I can use is a self-driving car, where there's a chance at any point it could make an unpredictable and potentially fatal move. You as the driver cannot trust it but are not actively engaged in the act of driving and have a much higher likelihood of being complacent.
Code review is difficult to get right, especially if the goal is judging correctness. Maybe this is a personal failing, but I find being actively engaged to be a critical part of the process; the more time I spend with the code I'm maintaining (and usually on call for!) the better understanding I have. Tedium can sometimes be a great signal for an abstraction!
The parts I've done a million times before take up... maybe 5% of my day? Even if an LLM replaced 100% of this work my productivity is increased by the same amount as taking a slightly shorter lunch.
I'm confused when people say that LLMs take away the fun or creativity of programming. LLMs are only really good at the tedious parts.
First of all, it's not tedious for a lot of us. Typing the characters themselves doesn't take a lot of time. Secondly, we don't work in a waterfall model, even at the lowest levels, so the amount of code in an iteration is almost always small. Many times it's less than it would take to articulate it in English. Thirdly, if you need a wireframe for your code, or a first draft version, you can almost always copy-paste or generate one.
I can imagine that LLMs are really helpful in some cases for some people. But so far I haven't found a single example where I, with simple copy-pasting, wouldn't have been faster. Not when I tried it myself, and not when others showed me how to use it.
Because the tedious parts were done long ago, while learning the tech. For any platform/library/framework you've been using for a while, you have some old projects lying around that you can extract the scaffolding from. And for any new $THING you're learning, you have to take the slow approach anyway to get its semantics.
For me it's typically wrong not in a fundamental way but a trivial way like bad import paths or function calls, like if I forgot to give it relevant context.
And yet the time it takes me to use the LLM and correct its output is usually faster than not using it at all.
Over time I've developed a good sense for what tasks it succeeds at (or is only trivially wrong) and what tasks it's just not up for.
>> I use it all the time, and it has accelerated my output massively.
Like how McDonalds makes a lot of burgers fast and they are very successful so that's all we really care about?
Terrible analogy. I don't commit jank. If the LLM comes out with nonsense, I'll fix it first.
> "and it has accelerated my output massively."
The folly of single ended metrics.
> but to avoid LLMs out of principal? That's just luddite.
Do you double check that the LLM hasn't magically recreated someone else's copyrighted code? That's just irresponsible in certain contexts.
> in case you feel it relevant.
Of course it's relevant. If a 19 year old with 1 year of driving experience tries to sell me a car using their personal anecdote as a metric I'd be suspicious. If their only salient point is that "it gets me to where I'm going faster!" I'd be doubly suspicious.
> Do you double check that the LLM hasn't magically recreated someone else's copyrighted code?
I frankly do not care, and I expect LLMs to become such ubiquitous table-stakes that I don't think anyone will really care in the long run.
> and I expect LLMs to become such ubiquitous table-stakes
Unless they develop entirely new technology they're stuck with linear growth of output capability for input costs. This will take a very long time. I expect it to be abandoned in favor of better ideas and computing interfaces. "AI" always seems to bloom right before a major shift in computing device capability and mobility and then gets left behind. I don't see anything special about this iteration.
> that I don't think anyone will really care in the long run.
There are trillions of dollars at stake, and access to even the basics of this technology is far from egalitarian or well distributed. Until it is, I would expect people whose futures and personal wealth depend on it to care quite a bit. In the meanwhile you might just accelerate yourself into a lawsuit.
That's really a non-issue. Anything copyrightable is non-trivial in length and complexity to the point that an LLM is not going to verbatim output that.
Funny, Gemma will do this all day long.
https://docs.oracle.com/javase/tutorial/getStarted/applicati...

`cat /bin/true` and `/bin/false` if you are on Solaris, etc., as an example too.
Note this paper that will be presented at ICSE in a couple of weeks too.
https://arxiv.org/abs/2408.02487v3
The point being is that this is very much a very real and yet unsolved problem with LLMs right now.
> I frankly do not care
I just heard a thousand expensive IP lawyers sigh orgasmically.
IP lawyers would have a field day if they had access to the code base of any large corporation. Fortunately, they do not.
Add "Without compromising quality then"!
I’m pretty much in the same boat as you, but here’s one place that LLMs helped me:
In Python I was scanning thousands of files, each for thousands of keywords. A naive implementation took around 10 seconds, obviously the largest share of execution time after running instrumentation. A quick ChatGPT query led me to Aho-Corasick and string-searching algorithms, which I had never used before. Plug in a library and bam, 30x speed-up for that part of the code.
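For the curious, here's a minimal sketch of that approach using the pyahocorasick library (the paths and keywords are made up for illustration; this isn't my exact code):

    import ahocorasick  # pip install pyahocorasick
    from pathlib import Path

    keywords = ["timeout", "deadlock", "segfault"]  # really: thousands of keywords

    # Build the automaton once; matching each file is then a single linear pass,
    # regardless of how many keywords there are.
    automaton = ahocorasick.Automaton()
    for idx, word in enumerate(keywords):
        automaton.add_word(word, (idx, word))
    automaton.make_automaton()

    hits = {}
    for path in Path("logs").rglob("*.txt"):  # really: thousands of files
        text = path.read_text(errors="ignore")
        for end_index, (idx, word) in automaton.iter(text):
            hits.setdefault(word, []).append((str(path), end_index))

Compared to checking each keyword against each file separately, the per-file cost no longer depends on the keyword count, which is roughly where that kind of speed-up comes from.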
I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.
I could have searched the web and probably found it out.
But the LLM basically auto completed the web, which I appreciate.
This is where education comes in. When we cross a certain scale, we should know that O(n) comes into play, and study the existing literature before trying to naively solve the problem. What would happen if the "AI" and web search didn't return anything? Would you have stuck with your implementation? What if you couldn't find a library with a usable license?
Once I had to look up a research paper to implement a computational geometry algorithm because I couldn't find it in any of the typical Web sources. There was also no library to use with a license suitable for our commercial use.
I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.
In my context, the scale is small. It just passed the threshold where a naive implementation would be just fine.
> What would happen if the "AI" and web search didn't return anything? Would you have stuck with your implementation?
I was fairly certain there must exist some type of algorithm exactly for this purpose. I would have been flabbergasted if I couldn't find something on the web. But if that failed, I would have asked friends and cracked open the algorithms textbooks.
> I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.
I understand what you mean, and agree with you. I can also assure you that that is not how I use it.
There is a time and a place for everything. Software development is often about compromise and often it isn’t feasible to work out a solution from foundational principles and a comprehensive understanding of the domain.
Many developers use libraries effectively without knowing every case in which an O(n) consideration comes into play.
Competently implemented, in the right context, LLMs can be an effective form of abstraction.
Yes! This is how AI should be used. You have a question that's quite difficult and may not score well on traditional keyword matching. An LLM can use pattern matching to point you in the direction of a well-written library based on CS research and/or best practices.
I mean, even in the absence of knowledge of the existence of text searching algorithms (where I'm from we learn that in university) just a simple web search would have gotten you there as well no? Maybe would have taken a few minutes longer though.
Extremely likely, yes. In this case, since it was an unknown unknown at the time, it was nice that the LLM explained that this class of algorithms exists; then I could immediately switch to Wikipedia to learn more (and be sure of the underlying information).
I think of LLMs as an autocomplete of the web plus hallucinations. Sometimes it’s faster to use the LLM initially rather than scour through a bunch of sites first.
But do you know every important detail of that library? For example, maybe that lib is not thread-safe, or it allocates a lot of memory to speed things up, or it won't work on an ARM CPU because it uses some x86 assembly hackery?
Nope. And I don’t need to. That is the beauty of abstractions and information hiding.
Just read the docs and assume the library works as promised.
To clarify, the LLM did not tell me about the specific library I used. I found it the old fashioned way.
And that's why there are leaky abstractions. It's very hard to abstract everything.
Sounds like a job for silver/ripgrep and possibly stack exchange. Might take another minute to get it rolling but has other benefits like cost and privacy.
> I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.
Get friends with weirder daily schedules. :-)
I think it's best if we all keep the hours from ~10pm to the morning sacred. Even if we are all up coding, the _reason_ I'm up coding at that hour is because no one is pinging me
I was with you 150% (though Arch, Golang and Zed) until a friend convinced me to give it a proper go and explained more about how to talk to the LLM.
I've had a long-term code project that I've really struggled with, for various reasons. Instead of using my normal approach, which would be to lay out what I think the code should do, and how it should work, I just explained the problem and let the LLM worry about the code.
It got really far. I'm still impressed. Claude worked great, but ran out of free tokens or whatever, and refused to continue (fine, it was the freebie version and you get what you pay for). I picked it up again in Cursor and it got further. One of my conditions for this experiment was to never look at the code, just the output, and only talk to the LLM about what I wanted, not about how I wanted it done. This seemed to work better.
I'm hitting different problems, now, for sure. Getting it to test everything was tricky, and I'm still not convinced it's not just fixing the test instead of the code every time there's a test failure. Peeking at the code, there are several remnants of previous architectural models littering the codebase. Whole directories of unused, uncalled, code that got left behind. I would not ship this as it is.
But... it works, kinda. It's fast, I got a working demo of something 80% near what I wanted in 1/10 of the time it would have taken me to make that manually. And just focusing on the result meant that I didn't go down all the rabbit holes of how to structure the code or which paradigm to use.
I'm hooked now. I want to get better at using this tool, and see the failures as my failures in prompting rather than the LLM's failure to do what I want.
I still don't know how much work would be involved in turning the code into something I could actually ship. Maybe there's a second phase which looks more like conventional development cleaning it all up. I don't know yet. I'll keep experimenting :)
> never look at the code, just the output, and only talk to the LLM about what I wanted
Sir, you have just passed vibe coding exam. Certified Vibe Coder printout is in the making but AI has difficulty finding a printer. /s
Computers don't need AI help to have trouble finding the printer, lol.
> Why would any self respecting software developer use this tripe?
Because I can ship 2x to 5x more code with nearly the same quality.
My employer isn't paying me to be a craftsman. They're paying me to ship things that make them money.
How do you define code quality in this case and what is your stack?
The definition of code quality is irrelevant to my argument as both human and AI written code are held to the same standard by the same measure (however arbitrary that measure is). 100 units of something vs 99 units of something is a 1 unit difference regardless of what the unit is.
By the time the AI is actually writing code, I've already had it do a robust architecture evaluation and review which it documents in a development plan. I review that development plan just like I'd review another engineers dev plan. It's pretty hard for it to write objectively bad code after that step.
Also, my day to day work is in an existing code base. Nearly every feature I build has existing patterns or reference code. LLMs do extremely well when you tell them "Build X feature. [some class] provides a similar implementation. Review that before starting." If I think something needs to be DRY'd up or refactored, I ask it to do that.
> The definition of code quality is irrelevant to my argument
Understood. Nevertheless, human engineers may deliberately choose a certain level of quality and accept certain risks (quality of output is not a direct measure of professionalism, so the question wasn't pointed at your skill). It's good that AI is matching your expectations, but it's important to understand what they are for your projects.
Code that you can understand and fix later, is acceptable quality per my definition.
Either way, LLMs are actually high up the quality spectrum, as they generate a very consistent style of code for everyone. That gives it a uniformity that is good when other developers have to read and troubleshoot the code.
> Code that you can understand and fix later, is acceptable quality per my definition.
This definition limits the number of problems you can solve this way. It basically means buildup of the technical debt - good enough for throwaway code, unacceptable for long term strategy (growth killer for scale-ups).
>Either way, LLMs are actually high up the quality spectrum
This is not what I saw; it's certainly not great. But that may depend on the stack.
I'm curious were you in an existing code base or a greenfield project?
I've found LLMs tend to struggle getting a codebase from 0 to 1. They tend to swap between major approaches somewhat arbitrarily.
In an existing code base, it's very easy to ground them in examples and pattern matching.
Greenfield. It's an interesting question, though, whether a model will perform better on today's project tomorrow because of more reference data. I would expect LLMs to lag behind on the latest technology, simply because their reference data has more older examples and may not include the latest versions of platforms or frameworks. I have seen LLMs break on basic CRUD tasks because of that.
Good employee, you get cookie and 1h extra pto
No, I get to spend 2 hours working with LLMs, and then spend the rest of the day doing whatever I please. Repeat.
You do understand that state of things is metastable, right? If the productivity gains truly are as claimed, then they will become the _expectation_, and you'll be back to working the same amount. Probably more, because management won't understand that having solid abstractions instead of LLM generated slop is worthwhile for scalability and maintainability. Or less, because you'll have been laid off and will need to do something else to make money. We all know where most of the profits will go if any of this stuff pans out.
I wholeheartedly agree. When the tools become actually worth using, I'll use them. Right now they suck, and they slow you down rather than speed you up. I'm hardly a world class developer and I can do far better than these things. Someone who is actually top notch will outclass them even more.
I understand not wanting to use LLMs, which have no correctness guarantees and randomly hallucinate, but what's wrong with ordinary LSPs and autocompletion? Those seem like perfectly useful tools.
I had a professor who used `ed` to write his code. He said only being able to see one line at a time forces you to think more about what you're doing.
Anyways, Cursor generates all my code now.
If you are like me (same vim, python, no LLM, no autocompletion, no syntax highlighting noise), LSP will make you a better developer: it makes navigating the codebase MUCH easier, including stdlib and 3rd party dependencies.
As a result, you don't lose flow and end up reading considerably more code than you would have otherwise.
Actually, I'm kind of cheating because I use https://github.com/davidhalter/jedi-vim for that purpose: allows me to jump to definitions with <leader>d ;) Excellent plugin, and doesn't require an LSP.
I can pretty much guarantee that with AI I'm a better software developer than you are without it. And I still love working on software used by millions of people every day, and take pride in what I do.
> with debian, python and vim
Why are you cheapening the product, butchering the process and decimating any hope for further skill development by using these tools?
Instead of python, you should be using assembly or heck, just binary. Instead of relying on an OS abstraction layer made by someone else, you should write everything from scratch on the bare metal. Don't lower yourself by using a text editor, go hex. Then your code will truly be "hand crafted". You'll have even more reason to be proud.
I am unironically with you. I think people should start to learn from computer architecture and assembly and only then, after demonstrating proper skill, graduate to C, and after demonstrating skill there graduate to managed-memory languages.
I was lucky enough to start my programming journey coding in Assembler on the much, much simpler micro computers we had in my youth. I would not even vaguely know where to start with Assembler on a modern machine. We had three registers and a single contiguous block of addressable memory ffs. Likewise, the things I was taught about computer architecture and the fetch-execute cycle back in the 80's are utterly irrelevant now.
I think if you tried to start people off on the kinds of things we started off on in the 80's, you'd never get past the first lesson. It's all so much more complex that any student would (rightly!) give up before getting anywhere.
Relevant XKCD: https://xkcd.com/378/
Good for you - if that’s what works for you, then keep on keeping on.
Don’t get too hung up on what works for other people. That’s not a good look.
This comment presupposes that AI is only used to write code that the (presumably junior-level) author doesn’t understand.
I’m a self-respecting software developer with 28 years of experience. I would, with some caveats, venture to say I am an expert in the trade.
AI helps me write good code somewhere between 3x and 10x faster.
This whole-cloth shallow dismissal of everything AI as worthless overhyped slop is just as tired and content-free as breathless claims of the limitless power or universal applicability of AI.
Sorry for the snark, but you're missing the forest for the trees here. Unless it's just some philosophical idea, use the tools that save you time. If anything, it saves you from writing boilerplate or making careless errors.
I don't need to "hand write" every line and character in my code, and guess what, it's still easy to understand and maintain because it's what I would have written anyway. That, or you're just bikeshedding minor syntax.
Like, if you want to be proud of a "hand built" house made with hammer and nails, be my guest, but don't conflate hand built with well built.
Why use a high level language like python? Why not assembly? Are you really proud of the slow unoptimized byte code that’s executed instead of perfectly crafting the assembly implementation optimizing for the architecture? /s
Seriously, comments like yours assume that all the rest of us, who DO make extensive use of these AI tools and have also been around the block for a while, are idiots.
[flagged]
That’s a pretty mean spirited way to approach this subject.
I think the creators of Redis and Django are very capable and self-respecting software developers.
[flagged]
I know my work and it largely isn't shoddy. I have a keen eye for detail and code quality is incredibly important to me. But yeah, I am lazy and I hate wasting my time. AI has been a huge boon in the amount of time it's saved me.
> And the vast majority of people using them are either too stupid or too lazy to actually review their own output.
I don't know if that's true or not. But I'm not stupid or too lazy to review the code, because I review every line and make sure I understand everything. The same way I do with every line of my own code or every line a colleague writes if it's relevant to what I'm working on.
You're in the wrong place if you want to talk about people, particularly fellow developers, in this way. You're just being toxic.
This is a classic case of inflating your own ego and intelligence and just assuming all devs other than you are inferior.
In reality there is a place and time for "lazy and shoddy code." Writing code is always a trade off between taking some amount of tech debt and getting the job done quickly vs writing great code.
Is it just me or has there been a wave of delusional people on Hacker News completely neglecting new advancements in technology? The two most common technologies I see having this type of discourse are AI coding and containers.
Either everyone here is a low level quantum database 5D graphics pipeline developer with a language from the future that AI hasn't yet learned, or some people are in denial.
I'm primarily an embedded firmware developer. Gas/electric power products. Ada codebase, so it's off the beaten path but nothing academic by any stretch of the imagination. I have a comprehensive reference manual that describes exactly how the language should be working, and don't need an LLM to regurgitate it to me. I have comprehensive hardware and programming manuals for the MCUs I program that describe exactly how the hardware should be working, and don't need an LLM to regurgitate it to me. I actually really specifically don't want the information transformed; it is engineered to be the way it is, and changing its presentation strips it of a lot of its power.
I deal with way too much torque and way too much electrical energy to trust an LLM. Saving a few minutes here and there isn't worth blowing up expensive prototypes or getting hurt over.
Software development is a spectrum and you're basically on the polar opposite end of the one AI is being used for: sloppy web dev.
I would be willing to live and let live for the sake of being practical, if the tolerance for (and even active drive towards) low quality slop didn't keep pushing further and further into places it shouldn't. People that accept it in sloppy web dev will accept it in fairly important line of business software. People that accept it in fairly important line of business software will accept it in IT infrastructure. People that accept it in IT infrastructure will accept it in non-trivial security software. People that accept it in non-trivial security software will accept it in what should be a high-integrity system, at which point real engineers or regulatory bodies hopefully step in to stop the bullshit. When asked, everybody will say they draw the line at security, but the drive towards Worse knows no bounds. It's why we see constant rookie mistakes in every IoT device imaginable.
My actual idealistic position, discounting the practicality, is that it shouldn't be tolerated anywhere. We should be trying to minimize the amount of cheap, born-to-die, plasticky shit in society, not maximize it. Most people going on about "muh feature velocity" are reinventing software that has existed for decades. The next shitty UI refresh for Android or Windows, or bad firmware update for whatever device is being screwed up for me, will leave me just as unhappy as the last. The sprint was indeed completed on time, but the product still sucks.
A guided missile should obviously not miss its target. An airliner should obviously never crash. An ERP system should obviously never screw up accounting, inventory, etc, although many people will tolerate that to an unreasonable degree. But my contention is that a phone or desktop's UI should never fail to function as described. A "smart" speaker should never fail to turn on or be controlled. A child's toy should never fail to work in the circumstances they would play with it.
If it's going to constantly fuck up and leave me unhappy and frustrated, why was it made? Why did I buy it? AI could have brought it to market faster, but for what? Once I noticed this, I did just quit buying/dealing with this junk. I'm an ideologue and maybe even a luddite, but I just don't need that bad juju on my soul. I use and write software that's worth caring about.
The consequences of incorrect code can be severe outside of front-end web development. For front-end web development, if the code is wrong, you see from your browser that your local web app is broken and try to fix it, or ship it anyway if it's a minor UI bug. For critical backend systems, subtle bugs are often discovered in downstream systems by other teams, and can result in financial loss, legal risk, reputational damage, or even loss of life.
It's totally valid to see a new piece of tech, try it, say it's not for you, and move on. With LLMs it feels force-fed, and simply saying "eh, I'm good, no thanks" isn't enough. Lots of hype and headlines about how it's going to take our jobs and replace us, pressure from management to adopt it.
Some new trends make perfect sense to me and I’ll adopt them. I’ve let some pass me by and rarely regretted it. That doesn’t make me a luddite.
I think it's just backlash against all the AI hype. I get it, I'm tired of hearing about it too, but it's already here to stay; it's been that way for years now. It's a normal part of development now for most people, the same as any new tool that becomes the industry darling. Learn to like it, or at least learn it; the reality is here whether you like it or not.
The gatekeepers are witnessing the gate opening up more and letting more people in and they don't like that at all.
[flagged]
The job market for knowledge jobs isn't even that good anymore, and plenty of people expect it to get worse regardless of their stance on AI. What makes you so sure that LLM users have a bank to laugh all the way to? There are already many like you; the money you'd make is peanuts.
Are you going to the bank to take out a loan? You're claiming you've outcompeted other programmers by using... optimizing compilers?