I'm not sure what to think of this. On first impression I don't like it, but maybe I have a misguided impression of how this works.
Does ChatGPT properly handle Western social customs? I'd say yes, and I presume that's because it has a truckload of data involving such customs, and even some that explicitly talks about those customs. People do stuff, it gets recorded, and then it goes into the LLM.
In this case, though, we are talking about "artificially" generating content so that the LLM responds the way the group making the content wants. Maybe that's something that has already been done, and I don't really have any ground to stand on?
I don't know. If I ask an LLM something, I just want the facts. I don't care about social norms. I want the truth lol. I don't need it to take my social wellbeing into account. That said, I would never ask an LLM any therapeutic questions. I would ask it "give me a list of therapists who specialize in marriage counseling in the Chicago area". I think trying to make LLMs socially conscious is a really, really bad idea. It's going to have really bad effects on young people still figuring out people, and on the very elderly who aren't quite aware of how it works or of its flaws, and that is likely to be universal across all cultures. Probably dumb people too, who think the machine is actually an emotional being.
LLMs are not just fed data; they are trained and tuned well beyond the original data, not only by pruning the data but by a process we call fine-tuning, which can produce as arbitrary a result as you want. It has been done for Western markets (OpenAI is a Western company), but clearly not for Persian culture. It's quite possible that the Persian version is mostly a Western style of dialogue with a translator in the middle.
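For a concrete picture of that step, here is a minimal supervised fine-tuning sketch using the HuggingFace transformers stack. The base model ("gpt2") is just a stand-in, "taarof_dialogues.jsonl" is a hypothetical file of curated culture-specific dialogues, and none of this is what OpenAI actually runs; the point is only that the style you get out is the style you curate in.

```python
# Minimal supervised fine-tuning sketch; model name and data file are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in for whatever base model is actually used
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical dataset: one {"text": "..."} record per line, each a dialogue
# exhibiting the custom you want the model to learn (offers, refusals, etc.).
data = load_dataset("json", data_files="taarof_dialogues.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = data.map(tokenize, batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="taarof-ft",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives plain next-token (causal LM) labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # afterwards, sampling from `model` reflects the curated dialogues
```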
The only weird and entitled part is that they ask that others' LLMs learn taarof. Why not just train your own and teach it to do whatever you like?
As a Scandinavian, I find ChatGPT doesn't fit my social customs. It feels like writing with an American.
It doesn't do what I would consider normal human politeness; it's all that really slimy salesman stuff. Even when you tell it to be more normal or casual, give it an example, or tell it not to dress things up at all, it ends up sounding like it thinks it's a few social classes above you.
Probably due to the system prompt telling it to 'help' the user. Besides being helpful, you have to position yourself as superior in order to help with knowledge.
'Help' is a highly loaded word.
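To see how much the system message alone steers that tone, here is a small sketch against the OpenAI chat completions API. Both prompt strings are illustrative rather than OpenAI's real ones, and gpt-4o-mini is just an example model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = "Is my plan for the meeting okay?"

# Same question under two different system prompts; the tone of the reply
# tracks the system message, not the question.
for system in (
    "You are a helpful assistant.",                 # generic 'helper' framing
    "Reply tersely and plainly, with no flattery.", # hypothetical reframing
):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(system, "->", resp.choices[0].message.content, sep="\n")
```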
> it has a truckload of data
It would be "artificial" only if LLMs performed badly despite having an equal amount of data containing examples of Eastern customs in their training sets. Even that's arguable, since we didn't have benchmarks for this particular case before.