Agreed. I'm happy they're training on my data.

My reasoning: I use AI for development work (Claude Code), and better models = fewer wasted tokens = less compute = less environmental impact. This isn't a privacy issue in a work context.

I regularly run concurrent AI tasks for planning, coding, and testing - easily hundreds of requests per session. If training on that interaction data helps future models become more efficient and accurate, everyone wins.

The real problem isn't privacy invasion - it's AI velocity dumping a cognitive tax on human reviewers. I'd rather have models that learned from real usage patterns and got better at being precise on the first try than ones that churn out confidently verbose slop and waste reviewer time.