> If you are letting your LLM look at StackOverflow, you are doing it wrong
So you've evaluated all the sources the model was trained on initially, have you? How long did that take you?
> I'm shipping quality software and features to my customers at a pace I haven't been able to before.
I'm sorry, are you agreeing with me or not? It sounds like you're agreeing with me.
I'm just saying that you can't just let it rip based on its training alone; it needs to be grounded and harnessed in stack-specific tooling.
I'd be more general and say it needs verification to guide it, and a narrowed scope so it doesn't wander off. How those get provided can vary. While I can do what I'm asking it to do, and have so many times that I don't want to anymore, I can't do it as fast as it can. But as someone said, it is stupid really fast. The bottleneck is now me slowing down this intern who thinks fast, stopping it to redirect it when it does bad things. The more pre-prompting, context, and verification tools I give it, the less I have to do that, so the faster it goes. Then I get to solve the parts of the problem I haven't done, until it's boring.