But you don’t think that, fundamentally, giving software that can hallucinate the ability to use your credit card to buy plane tickets is a bad idea?
It kind of seems like the only way to make sure a model doesn’t get exploited into emptying somebody’s bank account would be: “We’re not building that feature at all. Agentic AI stuff is fundamentally incompatible with sensible security policies and practices, so we are not putting it in our software in any way.”