For people who like to tick boxes, which is essentially most of the above, AI is welcome. That includes managers.
It still has nothing to do with software engineering. All good code was written by humans. AI takes it, plagiarizes it, launders it, and repackages it in a bloated form.
Whenever I look deeply at an AI-plagiarized mess, it looks like it is 90% there, but in reality it is only 50%. Fixing the mess takes longer than writing it oneself.
How can you say it has "nothing to do with software engineering" with a straight face?
I think you might be in serious denial.
Of course writing code isn't the only task of a software engineer, but it's an important one.
There wouldn't be so much controversy if that weren't the case.
"Writing code" as a task of its own is called cowboy coding. It's neat that AI can do this now, but that has nothing to do with proper software engineering which always starts from a careful, human-led design.
Yes, and every AI-first development workflow worth its salt does exactly this, and does it much more thoroughly than I’ve ever seen a team of meatbags do it.
My workflow, at a high level, is:
1. I write a high-level spec. Not as high level as a single-sentence prompt, but high level enough to capture my top requirements.
2. I prompt the AI to interview me about the spec to clear up any ambiguity or open questions, then when I’m satisfied, the AI writes a longer spec, which I then review.
3. Then I prompt the AI to write an implementation plan based on the spec. I might just skim this, and by this point I might be asking the LLM more questions than it’s asking me.
4. Now I hand it off to the implementer agent.
This isn’t cowboy coding, it’s not even agile. It’s waterfall. The problem with waterfall was that it was too slow, especially with the deserialization/serialization cost of routing all of this documentation through meatbrains. The LLM is doing just as much work, true, but faster.
The thing I found surprising was that, while LLMs are still pretty awful at writing as an art form, they are better technical writers than I have the time to be, especially when writing for an audience of other LLMs.
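For the curious, that spec → interview → plan → implement loop is easy enough to sketch in code. This is only an illustration: `ask_llm`, `run_workflow`, and the role prompts are hypothetical stand-ins for whatever API or agent harness you actually use, not lifted from any real tooling.

```python
# Rough sketch of the workflow above. `ask_llm` is a hypothetical placeholder;
# wire it to whatever model or agent you actually use.

def ask_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call (API client, CLI agent, etc.)."""
    raise NotImplementedError("connect this to your LLM of choice")

def run_workflow(rough_spec: str) -> str:
    # Step 2: the LLM interviews me about the rough spec.
    questions = ask_llm(
        "You are a requirements analyst.",
        f"Interview me about this spec. List every ambiguity or open question:\n{rough_spec}",
    )
    answers = input(f"Open questions:\n{questions}\nYour answers: ")

    # Still step 2: the LLM expands the spec plus my answers into a longer spec I review.
    full_spec = ask_llm(
        "You are a technical writer.",
        f"Write a detailed spec.\nOriginal spec:\n{rough_spec}\nQ&A:\n{questions}\n{answers}",
    )

    # Step 3: turn the reviewed spec into an implementation plan.
    plan = ask_llm(
        "You are a software architect.",
        f"Write a step-by-step implementation plan for:\n{full_spec}",
    )

    # Step 4: hand the plan to the implementer agent.
    return ask_llm("You are an implementation agent.", f"Implement this plan:\n{plan}")
```

The point is that each stage produces a reviewable artifact before any code is written, which is what makes it waterfall-shaped rather than cowboy coding.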
Is this project in production and for how long? How many users?
"has nothing to do with proper software engineering"
So you're saying software engineers don't write code? Just because there are other things that SWEs do doesn't mean it has nothing to do with it.
It's arguably a pretty important part. Would you really hire a software engineer who can't code?
Writing code and copying the output of an LLM is absolutely not the same.
You wouldn't call someone who takes LLM output and shoves it into a book an author. IDK why this distinction doesn't apply to devs too.
You call someone an author when they use a ghostwriter. They're giving inputs that are core to the output, even though they aren't doing all the writing. Same thing.
I can assure you a sizable number of people in the writing community look down on "authors" that only use ghostwriters.
Why do tech workers act shocked that people hate this junk being force-fed to them so much that they're now resorting to violence to reject it?
You think telling humans with specialized crafts that they don't matter is good politics? Good grief.
Of course.
I'm not surprised at all that devs are upset.
>You think telling humans with specialized crafts that they don't matter is good politics? Good grief.
Yeah, of course not. There are lots of historical examples of this. That being said, those historical examples don't play out well for the craftsmen, either.
Look, I'm a SWE myself. I see my job drastically changing right in front of my eyes. I know there's nuance to it, too, that's hard to articulate in these comment threads.
But I think a lot of people here are biased toward thinking that they are irreplaceable - I've definitely been in that camp. I don't think that it's wise, however.
Or even more appropriate: a movie director is almost never on-screen but the actors aren't the ones determining the shots to use or writing the script.
>You call someone an author when they use a ghostwriter.
I don't know about you, but I absolutely don't. Either you write the book yourself or you are not the author.
As Kendrick Lamar wrote:
I can dig rappin', but a rapper with a ghostwriter?
What the fuck happened? (Oh no)
What's a good example of human-led design?
The hard part of software engineering is turning a vague problem description into a set of box-ticking exercises. If ticking boxes became genuinely easier, the software engineering part is now a lot more valuable.
You’re reminding me a lot of those old assembly hackers who thought compilers were bullshit because they could hand-write better assembly. And I don’t mean that as an insult; those guys were probably right about their assembly code, just like an Amish craftsman will make better furniture than a factory in China. The problem is that the world needs more furniture and more software than skilled craftsmen can produce, and the skill gap between the craftsman and the mass production process is diminishing fast.
We’re still going to have handwritten software, just like we still have handwritten assembly. It just won’t be the norm.
No, fixing the mess definitely does not take longer than writing it oneself.
Your linter should identify all issues - including architectural and stylistic choices - and the AI agents will immediately repair them.
It's about 1000x faster than a human coder at repairing its own mess.
> Your linter should identify all issues - including architectural
If a linter could deterministically identify bad architecture, you wouldn't need an LLM, your linters could just write your code for you. The vibe coding takes are just getting more and more empty-headed...
Your custom linters don't check architectural design?
Linters statically check code and provide deterministic recommendations. LLMs are used to make judgment calls. I specifically write my linters for my project to make recommendations for LLMs.
This is how you save on token usage, so your LLMs aren't wasting tokens on static analysis that a linter could do for free.
That's at least how I make my linters.
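To make that concrete, here's a minimal sketch of a project linter that emits plain-language recommendations meant to be pasted into an agent's prompt instead of terse error codes. The bare-`except:` rule and the wording are my own illustrative assumptions, not anyone's actual setup; the deterministic checking stays in the linter, and the LLM only sees the conclusions.

```python
# Minimal sketch: a project-specific lint rule whose output is written as
# recommendations an LLM agent can act on directly.
# The bare-`except:` rule is just an example; real projects would add their own rules.

import ast
import sys

def lint_file(path: str) -> list[str]:
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)

    recommendations = []
    for node in ast.walk(tree):
        # Example rule: a bare `except:` silently swallows failures.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            recommendations.append(
                f"{path}:{node.lineno}: replace the bare `except:` with a "
                "specific exception type so errors aren't silently swallowed."
            )
    return recommendations

if __name__ == "__main__":
    notes = [rec for path in sys.argv[1:] for rec in lint_file(path)]
    if notes:
        # This text is pasted into the repair prompt, so the agent spends its
        # tokens on fixes instead of on rediscovering the problems.
        print("Apply these fixes before resubmitting:")
        print("\n".join(notes))
        sys.exit(1)
```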
> If a linter could deterministically identify bad architecture, you wouldn't need an LLM,
a) That's not what a linter is built for; it's a tool with a very specific role.
b) You must've never seen an LLM expose secrets in plain text or reach for the most convoluted solution you can think of.
I think you missed the point of the person you are replying to.