Agreed.
I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research), but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this problem comes up.
If you're using AI, you need to be that human, because as soon as you create a PR / HackerOne report, it stops being the AI's PR/report and becomes yours. That means the responsibility for parsing and validating it is on you.
I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.
> If you're using AI, you need to be that human, because as soon as you create a PR / HackerOne report, it stops being the AI's PR/report and becomes yours. That means the responsibility for parsing and validating it is on you.
And then add the pressure to massively increase velocity and productivity with LLMs, and that kind of ownership becomes less practical. Humans get squeezed into being the fall guys for when the LLM screws up.
Also, humans are just not suited to be the monitoring/sanity-check layer for automation. It doesn't work for self-driving cars (no one can sustain that level of vigilance while passively monitoring), and it doesn't work well for many other kinds of output, like code (it's often much harder to reverse-engineer understanding from a review than to do the work yourself).
> but there needs to be a human in the loop.
More than that - there needs to be a competent human in the loop.
We're going from being writers to editors: a particular human must still ultimately be responsible for signing off on the work, regardless of how it was put together.
This is also why you don't have your devs do QA. Someone has to be responsible for, and focused specifically on, quality; otherwise responsibility dissolves into finger-pointing.