So much this. I for one haven't opted out; I feel it's in our best interest to have better models. It would be ideal to be able to opt in or out per thread, but I don't expect most users to pay attention to, or be bothered with, that.
On that note, it would've been great to offer us an incentive – a discount, a donation on our behalf, planting a percent of a tree, or just asking nicely and explaining what's in it for us.
Regarding privacy, our conversations are saved anyway, so if there were a breach, this wouldn't make much of a difference, would it?
Agreed. I'm happy they're training on my data.
My reasoning: I use AI for development work (Claude Code), and better models = fewer wasted tokens = less compute = less environmental impact. This isn't a privacy issue in a work context.
I regularly run concurrent AI tasks for planning, coding, and testing - easily hundreds of requests per session. If training on that interaction data helps future models be more efficient and accurate, everyone wins.
The real problem isn't privacy invasion - it's AI velocity dumping a cognitive tax on human reviewers. I'd rather have models that learned from real usage patterns and got precise on the first try than confidently verbose slop that wastes reviewer time.