The result of working 4 hours to implement the thing is not just that you have the thing; it's that you have the thing and you understand it. Having the thing is next to useless if you don't understand it.
At best it plods along as you keep badgering Claude to fix it, until inevitably Claude reaches a point where it can't help. At that point you'll be forced to spend at least the 4 hours you would have originally spent, just to understand it well enough to fix it yourself.
At worst the thing will actively break other things you do understand in ways you don't understand, and you'll have to spend at least 4 hours cleaning up the mess.
Either way it's not clear you've saved any time at all.
Respectfully, I think I’m in a better position to decide a) what value this has to me and b) what I choose to learn vs. what I just let Opus deal with. You don’t have enough information to say whether I’ve saved time, because you don’t know what I’m doing or what my goals are.
Respectfully, a) I didn't say anything about what value this has to you but moreover...
b) you also don't have enough information to say whether it's saved you time, because the costs you will bear are in the future. Systems require maintenance; that's a fact you can't get rid of with AI. And oftentimes, maintaining a system requires more work than building it in the first place. Maintaining systems tends to require a deep understanding of how they work and the tradeoffs that were made when they were built.
But you didn't build the thing; you didn't even design it, as you left that up to Claude. That makes the AI the only thing on the planet that understands the system, but we know the AI doesn't actually understand anything at all. So no one understands the system you built, including the AI you used. And you expect that this whole process will have saved you time, while you play games?
I just don't see it working out that way, sorry. The artifact the AI spit out will eventually demand you pay the cost in time to understand it, or you will incur future costs for not understanding it as it fails to act as you expect. You'll pay either way in the end.
> And you expect that this whole process will have saved you time, while you play games?
The topic in question is “Can AI tools do a task that would take a human 4 hours”. Not whether it can do that in a way that leads to maintainability or sustained learning. I’m noodling on a hobby project as leisure time. I got what I wanted. I had fun.
> incur future costs for not understanding it as it fails to act as you expect
That is your stronger argument. I’ve seen quality problems with the search results that come from using a smaller embedding model than I should. I don’t know yet if that’s a blocker or tolerable.
But I think that argument would be wrong too, because I’m very glad I chose Claude. The biggest limitation might be that I don’t have the compute locally to run an embedding model good enough to achieve decent results. It would have been a huge waste of my time to build it by hand and discover that at the end. I’m not about to pay for a SaaS vector DB or run this in AWS. At that level of effort I’d just scrap it.
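None of the code below comes from the thread; it's a minimal sketch of why it's cheap to discover that an embedding model is too weak when the model sits behind a single function boundary. The toy character-frequency embedder and the sample documents are purely hypothetical stand-ins for a real local model:

```python
import math
from typing import Callable, List

# Hypothetical setup: the embedder is just a function from text to a vector,
# so a weak local model can later be swapped for a stronger one in one line.
Embedder = Callable[[str], List[float]]

def cosine(a: List[float], b: List[float]) -> float:
    # Standard cosine similarity, guarding against zero-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, docs: List[str], embed: Embedder, k: int = 3) -> List[str]:
    # Rank documents by similarity to the query under whatever model is plugged in.
    qv = embed(query)
    scored = sorted(((cosine(qv, embed(t)), t) for t in docs), reverse=True)
    return [t for _, t in scored[:k]]

def toy_embed(text: str) -> List[float]:
    # Stand-in for a real model: a 26-dim letter-frequency vector.
    v = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            v[ord(ch) - ord("a")] += 1.0
    return v

docs = ["claude code agent", "sqlite schema", "vector search"]
best = top_k("searching vectors", docs, toy_embed, k=1)
```

If the toy model's results turn out to be too poor, only `toy_embed` needs replacing; the search path is untouched, which is what makes "discover the limitation, then decide" cheap.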
You do learn how to control Claude Code and how to architect/orient things around getting it to deliver what you want. That's a skill that is both new and possibly going to be part of how we work for a long time (though it also overlaps with the work tech leads and managers do).
My proto+sqlite+mesh project recently hit the point where it's too big for Claude to maintain a consistent "mental model" of how, e.g., search and the db schemas are supposed to be structured. It kept taking hacky workarounds, like going directly to the db at the storage layer instead of through the API layer, so I hit an insane amount of churn trying to get it to implement some of the features needed to make it production ready.
Here's the whackamole/insanity documented in git commit history: https://github.com/accretional/collector/compare/main...feat...
But now I know some new tricks and intuition for avoiding this situation going forward. Because I do understand the mental model behind what this is supposed to look like at its core, and I need to maintain some kind of human-friendly guard rails, I'm adding integration tests in a different repo and a README/project "constitution" that claude can't change but is accountable for maintaining, and configuring it to keep them in context while working on my project.
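The comment doesn't say how a "constitution that claude can't change" would be enforced; one way to sketch that guard rail (the filename and the hash-pinning scheme are my assumptions, not the poster's actual setup) is a CI check that fails whenever the pinned file's content drifts:

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    # Content hash of the constitution file.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def constitution_unchanged(path: Path, pinned: str) -> bool:
    # Gate for CI: fail the build if the agent edited the pinned file.
    return file_sha256(path) == pinned

# Demo with a throwaway file standing in for the real README/constitution.
with tempfile.TemporaryDirectory() as d:
    doc = Path(d) / "CONSTITUTION.md"  # hypothetical filename
    doc.write_text("Search goes through the API layer, never raw storage.\n")
    pinned = file_sha256(doc)                          # human pins the hash once
    ok_before = constitution_unchanged(doc, pinned)    # file untouched
    doc.write_text("hacky workaround\n")               # agent edits the file
    ok_after = constitution_unchanged(doc, pinned)     # drift detected
```

A human re-pins the hash only after reviewing a change, which keeps the agent accountable to the document without being able to quietly rewrite it.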
Kind of a microcosm of startups' reluctance to institute employee handbook/kpis/PRDs followed by resignation that they might truly be useful coordination tools.
I agree with this sentiment a lot. I find my experience matches this. It's not necessarily fast at first, but you learn lessons along the way that develop a new set of techniques and ways of approaching the problem that feel fundamental and important to have learnt.
My fun lesson this week was that there's not a snowball's chance in hell GitHub Copilot can correctly update a Postman collection. I only realised there was a Postman MCP server after battling through that ordeal and eventually making all the tedious edits myself.
Yeah, this is close to my experience with it as well. The AI spits out some tutorial code and it works, and you think all your problems are solved. Then in working with the thing you start hitting problems you would have figured out if you had built the thing from scratch, so you have to start pulling it apart. Then you start realizing some troubling decisions the AI made and you have to patch them, but to do so you have to understand the architecture of the thing, requiring a deep dive into how it works.
At the end of the day, you've spent just as much time gaining the knowledge, but one way was inductive (building it from scratch) while the other is deductive (letting the AI build it and then tearing it apart). Is one better than the other? I don't know. But I don't think one saves more time than the other. The only way to save time is to allow the thing to work without any understanding of what it does.
> inevitably Claude reaches a point where it can't help.
Perhaps not. If LLMs keep getting better, more competent models can help him stay on top of it lol.
You're still captive to a product. Which means that when CloudCo. increases their monthly GenAI price from $50/mo. to $500/mo., you're losing your service or you're paying. By participating in the build process you're giving yourself a fighting chance.
I will quickly forget the details about any given code base within a few months anyway. Having used AI to build a project at least leaves me with very concise and actionable documentation and, as the prompter, I will have a deep understanding of the high-level vision, requirements and functionality.