The first anthropomorphization of AI that is actually useful.

It's not even an anthropomorphization: in RLHF-like setups, the reward signal is usually quite literally "did the user think the output was good".
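To make that concrete, here's a minimal sketch of how human preference becomes a training signal, assuming the standard Bradley-Terry pairwise-comparison loss commonly used for RLHF reward models (the function name and scores below are illustrative, not from any real library):

```python
import math

# Toy sketch: in RLHF, the reward model is fit to human preference
# comparisons. A rater sees two outputs and picks the one they liked;
# the Bradley-Terry loss pushes the model to score the preferred
# output higher, so the learned reward is literally "the human
# thought this output was better".

def bradley_terry_loss(score_chosen: float, score_rejected: float) -> float:
    """Negative log-probability that the chosen output beats the rejected one
    under a logistic (Bradley-Terry) preference model."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_chosen - score_rejected))))

# Loss is small when the reward model already agrees with the rater...
agrees = bradley_terry_loss(score_chosen=2.0, score_rejected=-1.0)
# ...and large when it scores the rater's preferred output lower.
disagrees = bradley_terry_loss(score_chosen=-1.0, score_rejected=2.0)
```

Minimizing this loss over many rated pairs yields a scalar reward that the policy is then optimized against, which is the sense in which "did the user like it" is the objective.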