That's because it's software / an application. I don't blame my editor for broken code either. You can't put blame on software itself; it just does what it's programmed to do.
But also, blameless culture is IMO important in software development. If a bug ends up in production, whose fault is it? The developer that wrote the code? The LLM that generated it? The reviewer that approved it? The product owner that decided a feature should be built? The tester that missed the bug? The engineering organization that has a gap in its CI?
As with the Therac-25 incident, it's never one cause: https://news.ycombinator.com/item?id=45036294
Blameless culture is important for a lot of reasons, but many of them are human. LLMs are just tools. If one of the issues identified in a post-mortem is "using this particular tool is causing us problems", there's not a blameless culture out there that would say "We can't blame the tool..."; the action item is "Figure out how to improve/replace/remove the tool so it no longer contributes to problems."
Blame is purely social and purely human. “Blaming” a tool or process and root-causing are functionally identical. Misattributing an outage to a single failure is certainly one way to fail to fix a process. Failing to identify faulty tools or applications is another.
I was being flippant in saying it’s never AI’s fault, but due to board/C-suite pressure it’s harder than ever to point out the ways AI makes processes more complex, harder to reason about, stochastic, and expensive. So we end up with problems that have to be attributed to something other than AI.
> You can't put blame on software itself; it just does what it's programmed to do.
This isn't what AI enthusiasts say about AI, though. They only bring that up when they get defensive, but then they turn around and say it will totally replace software engineers and is not just a tool.