That sounds like the advice of someone who doesn't actually write high-quality code. Perhaps a better title would be "how to get something better than pure slop when letting a chatbot code for you" - and then it's not bad advice I suppose. I would still avoid such code if I can help it at all.
This take is pretty uncharitable. I write high-quality code, but there's also a bunch of code that could be useful that I don't write because it's not worth the effort. AI unlocks a lot of value in that way. And if there's one thing my 25 years as a software engineer have taught me, it's that while code quality and especially system architecture matter a lot, being super precious about every line of code really does not.
Don't get me wrong, I do think AI coding is pretty dangerous for those without the right expertise to harness it with the right guardrails, and I'm really worried about what it will mean for open source and SWE hiring, but I do think refusing to use AI at this point is a bit like the assembly programmer saying they'll never learn C.
Man, you are really missing out on the biggest revolution of my life.
This is the opinion of someone who has not tried to use Claude Code, in a brand new project with full permissions enabled, and with a model from the last 3 months.
People have been saying "the models from (recent timeframe) are so much better than the old ones, they solve all the problems" for years now. Since GPT-4 if not earlier. Every single time, those goalposts have shifted as soon as the next model came out. With such an abysmal track record, it's not reasonable to expect people to believe that this time the tool actually has become good and that it's not just hype.
When is the last time someone said that, motivating you to try the latest model? If it was 6 or more months ago, my reply is that the sentiment was partially incorrect in the past, but it is not incorrect now. If a conspiracy theorist is always wrong about a senior citizen being killed, that does not make the senior immortal.
This is a fading but common sentiment on hacker news.
There’s a lot of engineers who will refuse to wake up to the revolution happening in front of them.
I get it. The denialism is a deeply human response.
Where is all the amazing software and/or improvements in software quality that is supposed to be coming from this revolution?
So far the only output is the "How I use AI blogs", AI marketing blogs, more CVEs, more outages, degraded software quality, and not much of shipping anything.
Are there any examples of real products, and not just anecdotes of "I'm 10x more productive!"?
I was in the same mindset until I actually took the Claude Code course they offer. I was doing so much wrong.
The two main takeaways: create a CLAUDE.md file that defines everything about the project, and have Claude feed back into that file when it makes mistakes, recording how to fix them.
Now it creates well-structured code and production-level applications. I still double-check everything, of course, but the level of errors is much lower.
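For the curious: Claude Code does read a CLAUDE.md file at the project root as persistent project memory. A minimal sketch of the kind of file described above (the project name, conventions, and rules here are hypothetical, not the commenter's actual file) might look like:

```markdown
# Project: pdf-graph-explorer (hypothetical example)

## Overview
Reads multiple PDFs, extracts key stakeholders and related data,
and renders them as an explorable network graph.

## Conventions
- Python 3.11, type hints on all public functions, pytest for tests.
- Run the test suite before declaring any task complete.

## Known mistakes and fixes (appended after each correction)
- Do not add silent fallback code paths; fail loudly instead.
- PDF text extraction already has a helper; reuse it rather than
  re-implementing parsing in each module.
```

The "known mistakes" section is the feedback loop the commenter describes: when Claude gets something wrong, the correction gets written back into the file so later sessions avoid repeating it.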
An example application it created from a CLAUDE.md I wrote: the application reads multiple PDFs, finds the key stakeholders and related data, then generates a network graph across those files and renders it as an explorable graph in Godot.
That took 3 hours to make and test. It also supports OpenAI (lmstudio), Claude, and Ollama for its LLM callouts.
One issue I can see happening is the duplication of assets across work: instead of finding an asset someone else already built, people have been creating their own.
Sounds like a skill issue. I’ve seen it rapidly increase the speed of delivery in my shop.
Why is it so hard to find examples?
You’re asking to see my company’s code base?
It’s not like with AI we’re making miraculous things you’ve never seen before. We’re shipping the same kinda stuff just much faster.
I don’t know what you’re looking for. Code is code it’s just more and more being written by AI.
Do you find reading hard? I'm asking for examples. Why isn't anyone showing this off in blog posts. Or a youtube video or something. It's always this vague, it's faster, just trust me bro bullshit and I'm sick of it. Show me or don't reply.
So you want a video of me coding at work using AI? There are entire YouTube channels dedicated to this already. There are already copious blogs about people's AI workflows -- this very post you're commenting in is one (do you find reading hard?)
Clarify the actual thing you need to believe the technology is real or don't reply.
It's only revolutionary if you think engineers were slow before, or that software was not being delivered fast enough. It's revolutionary for some people, sure, but everyone is in a different situation, so one man's trash can be another man's treasure. Most people are treading both paths: automation threatens their livelihood and the work they loved, and they still can't understand why anyone would pay companies that are actively trying to convince your employer that your job is worthless.
Even if I like this tech, I still don't want to support the companies who make it. I have yet to pay a cent to these companies; I'm still using the credits given to me by my employer.
Of course software hasn’t been delivered fast enough. There is so so so much of the world that still needs high quality software.
I think there are four fundamental issues here for us...
1. There are actually fewer software jobs out there, with huge layoffs still going on, so software engineering as a profession doesn't seem to profit from AI.
2. The remaining engineers are expected by their employers to ship more. Even if they can manage that using AI, there will be higher pressure and higher stress on them, which makes their work less fulfilling, more prone to burnout etc.
3. Tied to the previous point: this increases workism, measuring people and engineers by some output benchmark alone, treating them more like factory workers than like expert, free-thinking individuals (often with higher-education degrees). Which again degrades the profession as a whole.
4. Measuring developer productivity hasn't really been cracked before either, and even after AI there is not a lot of real data proving that these tools actually make us more productive, whatever that may mean. There is only anecdotal evidence: "I did this in X time, when it would otherwise have taken me Y time" - but at the same time it's well known that estimating software delivery timelines is next to impossible, meaning the estimate of "Y" is probably flawed.
So a lot of things going on apart from "the world will surely need more software".
I don't see how anything you're saying is a response to what I said.
Do you have this same understanding for all the people whose livelihoods are threatened (or already extinct) due to the work of engineers?
Yes, but who did we automate out of a job by building crappy software? Accountants are more threatened by AI than by any of the software we created before; same with lawyers and teachers. We didn't automate any physical labourers out of a job either.
It's insane! We are so far beyond gpt-3.5 and gpt-4. If you're not approaching Claude Code and other agentic coding tools with an open mind, with the goal of deriving as much value from them as possible, you are missing out on superpowers.
On the flip side, anyone who believes you can create quality products with these tools without actually working hard is also deluded. My productivity is insane, and what I can create in a long coding session is incredible, but I am working hard the whole time: reviewing outputs, devising GOOD integration/e2e tests to actually test the system, manually testing throughout, and keeping my eyes open for stereotypically bad model behaviors like creating fallbacks or deleting code to fulfill some objective.
It's actually a downright pain in the ass, and a very unpleasant experience, working this way. I remember the sheer flow state I used to get into when doing deep programming, so immersed in managing the states and modeling the system. The current way of programming with the models doesn't seem to provide that for me. So there are aspects of how I have programmed my whole life that I dearly miss. Hours used to fly past without my noticing, thanks to flow. That's no longer the case most of the time.
Claude Code is great at figuring out legacy code! I don't get the «for new systems only» idea, myself.
> in a brand new project
Must be nice. Claude and Codex are still a waste of my time in complex legacy codebases.
Brand new projects have a way of turning into legacy codebases
What are you talking about? Exploring and explaining the legacy codebases is where they shine, in my experience.
Can you be specific? You didn't provide any constructive feedback, whatsoever.
The article did not provide a constructive suggestion on how to write quality code, either. Nor even empirical proof in the form of quality code written by LLMs/agents via the application of those principles.
Yes it did, it provided 12 things that the author asserts helps produce quality code. Feel free to address the content with something productive.
Look up luddites on Wikipedia, might be too deep to see the similarities though.
So, I went and did that:
https://en.wikipedia.org/wiki/Luddite
> workers who opposed the use of certain types of automated machinery due to concerns relating to worker pay and output quality... Luddites were not opposed to the use of machines per se (many were skilled operators in the textile industry); they attacked manufacturers who were trying to circumvent standard labor practices of the time.
I heard that about NFTs not long ago.