Every time Claude Code runs tests or builds after a change, it's collecting training data.
Has Anthropic been able to leverage this training data successfully?
I can't pretend to know how things work internally, but I would expect it to be involved in model updates.
You need human-language programming questions to train on too, not just the code.
That's what the related chats are for?