I don't know why anyone is still using this - much less defending it.
In the last MONTH, I've asked how you can defend implementing (or even choosing to implement) AI when:
the AI you have implemented throughout your company changes the results you've come to trust? https://www.theregister.com/2026/04/13/claude_outage_quality...
or won't let you log in? https://github.com/anthropics/claude-code/issues/44257
or makes stuff up? https://dwyer.co.za/static/claude-mixes-up-who-said-what-and...
or when it's down? https://status.claude.com/incidents/6jd2m42f8mld
or when you get banned? https://bannedbyanthropic.com/
or installs spyware? https://www.thatprivacyguy.com/blog/anthropic-spyware/
or takes the features you use out of the plan you subscribe to without notice? https://www.theregister.com/2026/04/22/anthropic_removes_cla...
or renders your IP legally unenforceable? https://legallayer.substack.com/p/who-owns-the-claude-code-w...
or stealthily changes pricing terms based on... file names you have? https://github.com/anthropics/claude-code/issues/53262
or invoices you for usage you did not perform, and won't answer support requests until you raise hell on social media? https://nickvecchioni.github.io/thoughts/2026/04/08/anthropi...
I mean, seriously, why on earth would you use this? I thought we were professionals.
The problem in most of those cases isn't AI itself. Many of the issues you cited are specific to Anthropic, and many could have been avoided with better testing.
Yes, I'm assuming the AI/LLM of choice you've implemented in your software engineering org is Claude, because as far as I can tell no alternative comes close to its quality for software work.