> The problem arises when Bob encounters a problem too complex or unique for agents to solve.
It's actually worse than that: the AI will not stop and say "too complex, try again in a month with the next SOTA model". Rather, it will give Bob a plausible-looking solution that Bob cannot identify as right or wrong. If Bob is working on a problem with instant feedback, that's OK: he can flag it, try again, ask for help. But if the error can't be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already gotten promoted by then, and Bob's replacement gets to deal with it. Either way, Bob cannot be trusted any more than the LLM itself.
How is that Bob's problem?
When he said we needed more time to do this properly, he was labelled slow. They pushed him to use AI all day long and told him at the all-hands that there will be programmers who use AI, and those who don't will be left behind. So he said fuck doing it right for the project, let me do it right for myself.
Now that he's gotten his promotion, they'll hire three people in a cheaper location to handle the various issues that are coming up (the product will always have bugs, you see). Given his excellent speed of delivery, they'll report to him.
Good for Bob.
> How is that Bob's problem?
It isn't. Bob has a different problem: there are millions of Bobs with access to the same tools, meaning the value of Bob's labor is commodity-priced. That may be good for some Bobs and bad for others.