It reads as racist if you parse it as (skin tone and attractiveness), but if you instead parse it as (skin tone) and (attractiveness), i.e. as two entirely unrelated characteristics of the output, then it reads as nothing more than a claim about relative differences in behavior between models.
Of course, given the sensitivity of the topic it is arguably somewhat inappropriate to make such observations without sufficient effort to clarify the precise meaning.
I find that people who are hypersensitive to racism are usually themselves pretty racist. It's like how people who are aroused by something taboo are usually its biggest critics. I forget what this phenomenon is called.
Calling out overt racism is not “hypersensitivity” and in what fucking world could it be racism? This mentality is why the tech industry is so screwed up.
You have your head in your ass. Read the text:
> Chinese text2image generate attractive and more light skinned humans.
> I think this is another area where Chinese AI models shine.
That is racism. There simply is no other way to classify it.