While you are dying on a hill, I'm using LLMs to ship quality software and features to my customers at a pace I haven't managed before. And no, not some nextjs slop. If you are letting your LLM look at StackOverflow, you are doing it wrong - it needs to be grounded in your stack's official docs and whatever style/rules you prefer, wired up with other tooling like linting/formatting, duplication checking, etc. And yes, you have to constantly monitor the output and review every line of code - but it's still faster, and if managed correctly it produces better code and (this is the hill I will die on) better test suites and documentation than I would have written.
> If you are letting your LLM look at StackOverflow, you are doing it wrong
So you've evaluated all the sources the model was trained on initially, have you? How long did that take you?
> I'm shipping quality software and features to my customers at a pace I haven't been able to before.
I'm sorry, are you agreeing with me or not? It sounds like you're agreeing with me.
I’m just saying that you can’t just let it rip on its training alone; it needs to be grounded and harnessed in stack-specific tooling.
I'd be more general and say it needs verification to guide it, and a narrowed scope so it doesn't wander off. How those get provided can vary. While I can do what I'm asking it to do, and have so many times that I don't want to anymore, I can't do it as fast as it can. But as someone said, it is stupid really fast. The bottleneck is now me slowing down this intern who thinks fast, stopping it to redirect it when it does bad things. The more pre-prompting, context, and verification tools I give it, the less I have to do that, so the faster it goes. Then I get to solve the parts of the problem I haven't done yet, until it's boring.
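A minimal sketch of that harness, under stated assumptions: `generate` is a hypothetical stand-in for the model call, and `verify` stands in for whatever stack-specific checks (lint, format, tests) you actually wire up. The point is the loop shape, not these toy checks:

```python
# Sketch of a "generate, verify, redirect" harness. Both functions below
# are hypothetical placeholders: in practice, generate() calls your model
# and verify() shells out to your real linters/formatters/test runners.

def verify(code: str) -> list[str]:
    # Toy checks standing in for stack tooling (e.g. a linter + formatter).
    problems = []
    if "\t" in code:
        problems.append("tabs found; formatter expects spaces")
    if "TODO" in code:
        problems.append("unfinished TODO left in output")
    return problems

def generate(prompt: str, feedback: list[str]) -> str:
    # Stand-in for the model: first attempt is sloppy; with feedback it
    # "fixes" exactly what was flagged, which is the redirect in action.
    code = "def add(a, b):\n\treturn a + b  # TODO tidy"
    if feedback:
        code = code.replace("\t", "    ").replace("  # TODO tidy", "")
    return code

def harness(prompt: str, max_rounds: int = 3) -> str:
    feedback: list[str] = []
    for _ in range(max_rounds):
        code = generate(prompt, feedback)
        feedback = verify(code)
        if not feedback:
            return code  # passed every check, no human intervention needed
    raise RuntimeError("model kept failing checks: " + "; ".join(feedback))
```

The more checks `verify` runs automatically, the less often you have to stop the loop and redirect by hand.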