Is there a legal distinction between training, post-training, fine-tuning, and filling up a context window?
In all of these cases an AI model takes a copyrighted source, reads it, jumbles the bytes, and stores them in its memory as vectors.
Later, a query reads these vectors and outputs them in a form that may or may not be similar to the original.
Judges have previously ruled that training counts as sufficiently transformative to qualify for fair use: https://www.whitecase.com/insight-alert/two-california-distr...
I don't know of any rulings on the context window, but it's certainly possible judges would rule that it would not qualify as transformative.