> The goal is to give the answer that would have come from the training data if that question were asked.
Or, more cynically, the goal is to give the answer that makes you use the product more. Finetuning diverges the model from what's in the training set and toward what users "prefer".
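That divergence can be made concrete with a preference-finetuning objective. Below is a minimal sketch of one common such objective, the DPO (Direct Preference Optimization) loss for a single preference pair; the function name, signature, and numbers are illustrative, not from any particular library:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) answer pair.

    Rewards the policy for raising its log-prob ratio (relative to
    the frozen reference model, i.e. the pretrained/SFT model) on
    the preferred answer and lowering it on the rejected one --
    literally pulling the model away from the reference
    distribution and toward user preferences.
    """
    chosen_shift = policy_chosen_logp - ref_chosen_logp
    rejected_shift = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_shift - rejected_shift)
    # -log sigmoid(margin): the loss shrinks as the preferred
    # answer gains probability relative to the reference model.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With policy == reference the margin is 0, so the loss is ln 2.
print(round(dpo_loss(-5.0, -5.0, -5.0, -5.0), 4))  # 0.6931
```

The `beta` term is the only brake: a small `beta` tolerates large divergence from the reference model before the loss saturates, which is one mechanism by which the finetuned model drifts from "what the training data would say" toward "what raters clicked."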