From Anthropic communication:

> If you’re an existing user, you have until September 28, 2025 to accept the updated Consumer Terms and make your decision. If you choose to accept the new policies now, they will go into effect immediately. These updates will apply only to new or resumed chats and coding sessions. After September 28, you’ll need to make your selection on the model training setting in order to continue using Claude. You can change your choice in your Privacy Settings at any time.

It doesn’t clearly say whether this applies to all prompts from the past.

https://www.anthropic.com/news/updates-to-our-consumer-terms

Under the FAQ:

> Previous chats with no additional activity will not be used for model training.

That will be quietly removed later.

All your data are belong to us.

* Randomly load up a previous chat as the default and just wait for bingpot.
* "Tiny oopsie doopsie, our bad."

That "with no additional activity" seems backdoor-ish, though. If I said some things in a chat expecting privacy under the agreement, and then they change the agreement, does that mean they can collect my data from that chat going forward, or does it mean they can collect it retroactively?

Well, you have to think about how LLMs actually work. There is no "going forward". Every new token is generated based on the entire context window (the chat history).
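The point above can be sketched in a few lines. This is a minimal, hypothetical illustration of a stateless chat API (the function and payload shapes here are assumptions for illustration, not Anthropic's actual API): resuming an old conversation re-sends the entire prior history with the new turn, so "additional activity" effectively puts the old messages back on the wire.

```python
# Hypothetical sketch of a stateless chat turn: the provider receives the
# FULL history plus the new message, not just the new message.
def send_turn(history, new_user_message):
    request_payload = history + [{"role": "user", "content": new_user_message}]
    # ...the provider generates the next tokens from this whole payload...
    assistant_reply = {"role": "assistant", "content": "<generated>"}
    return request_payload + [assistant_reply]

# A chat created under the old terms:
old_chat = [
    {"role": "user", "content": "something said expecting privacy"},
    {"role": "assistant", "content": "<old reply>"},
]

# One new message ("additional activity") after the terms change
# carries the old content along with it:
resumed = send_turn(old_chat, "quick follow-up")
old_content_resent = any(
    m["content"] == "something said expecting privacy" for m in resumed
)
print(old_content_resent)  # True: old messages travel with the new turn
```

So from the provider's side there is no clean line between "old" and "new" data once a chat is resumed; the whole history is part of every subsequent request.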

Then you can be retroactively, implicitly opted in to processing, and that's a dark pattern if I've ever heard of one.

they can ... generate activity :)