Oh definitely! :-) I just think it's like an amplification of the same old thing. It makes it easier to play that game and harder to counter it.

Fred now generates 5,000 lines of horse-dung that appears to work, and management are gob-smacked. It is extremely fragile, has no security, and the tests are all autogenerated, so nobody knows if they're even testing what actually matters, but...

Above the team-lead level, management, product managers, etc. have no idea what's inside a piece of work that makes it maintainable or secure or anything else. All they see is their idea realised, and the person who did it has a golden halo, so you cannot say a single negative thing about the work without a tonne of shit pouring on you.

This has happened to me. It was in the days when ChatGPT was much worse than it is now, and the code was almost one big hallucination - indescribably bad. The only advantage I had was that the whole team, other than Fred of course, rejected the PR. It still caused a world of horrible problems and incredible behavior from "Fred", and yet he was able to get away with it until he finally stepped so far over the line that nobody could support him. It drove other team members to leave, so it was a disaster.

Your case sounds pretty extreme, like a combination of multiple toxic factors, and I still don't think it is due to AI. Fred could have pushed bad code without the help of AI, and the situation would be the same. In this case the problem is Fred, not AI - it's Fred's negligence that caused the issue. Even if you banned AI in your company, Fred would behave the same and find a different way to be toxic.