This is like asking why the model won't help you make jokes with the N-word in them. The model is the product of a business operating in a society: it's subject to social norms as well as laws, and it's shaped by public perception. Not insulting historically oppressed minority groups is a social norm in the USA and elsewhere.

One of the ways this makes its way into the model is through the training data. The Common Crawl data used by AI companies is intentionally filtered to remove harmful content, which includes racist content and probably also anti-trans, anti-gay, and similar material. But the companies are almost certainly also adding restrictions to the model itself (probably as part of the safety tuning) that explicitly prevent it from helping people generate content that could be abusive, and vulnerable minority groups would be covered under that.
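To make the data-filtering step concrete, here's a toy sketch of the idea. This is purely illustrative: real pre-training pipelines use trained toxicity/quality classifiers over web-scale data, not a simple word blocklist, and the names and placeholder terms below are made up.

```python
# Toy illustration of filtering harmful documents out of a training corpus.
# Real pipelines (e.g. over Common Crawl) use learned classifiers; this
# blocklist approach is only a sketch of the concept.

BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholders, not real terms

def is_harmful(document: str) -> bool:
    """Flag a document if it contains any blocklisted term."""
    words = set(document.lower().split())
    return bool(words & BLOCKLIST)

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass the harmfulness check."""
    return [doc for doc in documents if not is_harmful(doc)]

corpus = ["a perfectly normal sentence", "this one contains slur1 somewhere"]
print(filter_corpus(corpus))  # -> ['a perfectly normal sentence']
```

The second, model-level layer of restrictions is different in kind: it's baked in during safety tuning rather than applied as a preprocessing pass like this.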

Unconscious bias is a separate issue. It ends up in the model by accident, inherited from the designers and the data; it has been found in many models and remains a persistent problem.