you've hit the nail on the head here. AI rollout has this hilarious consequence: "lower" departments have long insulated the c-suite against its worst excesses and mistakes. Now that barrier is slowly crumbling under AI-first mandates, giving the c-suite an incredibly rare opportunity to discover how bad some of its ideas are in practice, with less opportunity to blame those outcomes on others.
I am pretty certain that if you are in an org where the c-suite shifts the blame for negative results onto external causes, they will find a way to keep doing so in the age of AI.
I've always thought of this as the reality grease problem.
We need rules. Yet the infinite variety of reality creates infinite situations in which the rules are counterproductive.
Previously: the ground folks had a brain and bent/ignored certain rules in the interest of getting their job done.
The principal peril of creating a more end-to-end automated, lights-out business is that there is no longer a brain to grease the interface between the c-level and reality.
And c-level is never going to admit their own mistakes.
Ergo, you're going to get a lot of command-heavy companies that plow themselves into the ground over the next 10-20 years, because the low-level people they're going to fire were performing an essential function.
(Note: the easiest escape, inasmuch as I can see one, lies in radically data-driven management, with frequent random shifts between analogous but independent metrics, as sketched below.)
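To make that note concrete, here's a minimal Python sketch of the idea. Everything in it is hypothetical (the metric names, the toy order data); the point is just the mechanism: the headline metric is re-drawn at random from a pool of analogous but independently sourced measures, so no single number stays gameable for long.

```python
import random

# Hypothetical pool: each metric approximates "fulfillment health" but is
# computed from an independent signal, so gaming one doesn't move the others.
METRICS = {
    "on_time_rate": lambda orders: sum(o["on_time"] for o in orders) / len(orders),
    "kept_rate": lambda orders: 1 - sum(o["refunded"] for o in orders) / len(orders),
    "repeat_rate": lambda orders: sum(o["repeat_customer"] for o in orders) / len(orders),
}

def pick_metric(rng: random.Random) -> str:
    """Choose this period's headline metric uniformly at random."""
    return rng.choice(sorted(METRICS))

def score(orders: list[dict], metric: str) -> float:
    """Evaluate the currently active metric over the raw data."""
    return METRICS[metric](orders)

if __name__ == "__main__":
    rng = random.Random()  # deliberately unseeded: the shift must be unpredictable
    orders = [  # toy data standing in for whatever the business actually records
        {"on_time": True, "refunded": False, "repeat_customer": True},
        {"on_time": False, "refunded": True, "repeat_customer": False},
        {"on_time": True, "refunded": False, "repeat_customer": False},
    ]
    metric = pick_metric(rng)
    print(f"this quarter we optimize: {metric} = {score(orders, metric):.2f}")
```

The design choice doing the work is Goodhart-resistance: any single metric becomes a target and stops measuring, but if the target rotates unpredictably among independent proxies, the cheapest strategy is to actually improve the underlying thing they all track.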