Okay, they're not going to be learning in real time. It's not like your data gets taken and the model immediately gets something out of it; it doesn't. What you're describing is context.
Data gathered for training still has to actually be used in a training run, i.e. to build a new model that, presumably, takes months to develop and train.
Not to mention your drop-in-the-bucket contribution will have next to no influence on the next model. It won't capture things specific to YOUR workflow, just common patterns across many users.
> Not to mention your drop-in-the-bucket contribution will have next to no influence on the next model. It won't capture things specific to YOUR workflow, just common patterns across many users.
I wonder about this. In the future, if I correct Claude when it makes fundamental mistakes about some niche topic, like an exotic programming language, wouldn't those corrections be very valuable? It seems like the training pipeline should treat the signal-to-noise ratio in these cases (where there are few external resources to mine) as quite high and factor that in during the next training cycle.
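To make that intuition concrete, here's a minimal sketch of how a data-curation step could upweight corrections on rare topics. Everything here is hypothetical: the `rarity_weights` function, the topic labels, and the floor/cap bounds are illustrative assumptions, not anything a lab has actually described.

```python
from collections import Counter

def rarity_weights(samples, floor=1.0, cap=10.0):
    """Hypothetical sketch: weight each correction inversely to how
    common its topic is among the collected samples."""
    counts = Counter(topic for topic, _ in samples)
    total = len(samples)
    weights = []
    for topic, _ in samples:
        # Normalize so an evenly represented topic gets weight ~1.0;
        # rare topics (an exotic language) rise toward the cap.
        raw = total / (counts[topic] * len(counts))
        weights.append(min(max(raw, floor), cap))
    return weights

# Illustrative usage: one correction about an obscure language among
# fifty Python corrections gets the capped weight, while the common
# topic stays at the floor.
samples = [("python", "fix the loop")] * 50 + [("zig", "fix comptime usage")]
weights = rarity_weights(samples)
print(weights[0], weights[-1])  # 1.0 10.0
```

Of course, a real pipeline would also have to verify that the correction is actually right, since upweighting rare data amplifies rare mistakes too.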