> This belligerent take is so very human, though. We just don't know how an alien intelligence would reason or what it wants. It could equally well be pacifist in nature, whereas we typically conquer and destroy anything we come into contact with. Extrapolating from that that an AGI would try to do the same isn't a reasonable conclusion, though.

Given the general human condition, and given that the Pentagon recently announced it will adopt an LLM that described itself as "Mecha Hitler", are we likely to create a pacifist AI, or a warmongering one?

Even without that specific example, all machine learning follows some path through a high-dimensional space of possibilities according to a target function (the "loss function" or "reward function") that we humans define. That target function is itself only an approximation of what the humans building the AI actually want (see: all buggy software ever, every legal loophole, the cobra effect, Goodhart's law, and every min-maxing player who breaks a game by optimizing it), and what the AI ends up learning is in turn an approximation of that target function.
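The proxy-versus-goal gap described above (Goodhart's law) can be sketched with a toy optimizer. Everything here is invented for illustration: the "true objective" stands in for what the designers want, the "proxy reward" for the measurable target function, and a greedy hill-climber for the learning process.

```python
import random

# Toy Goodhart's law: the true goal is to land near a hidden target,
# but the proxy reward only measures one coarse feature ("bigger is
# better"), which correlates with the goal only up to a point.

TARGET = 10.0

def true_objective(x):
    # What the designers actually want: x close to TARGET.
    return -abs(x - TARGET)

def proxy_reward(x):
    # The measurable stand-in that actually gets optimized.
    return x

def hill_climb(reward, x=0.0, steps=200, step_size=0.5):
    # Greedy local search: accept any step the reward function prefers.
    rng = random.Random(0)
    for _ in range(steps):
        candidate = x + rng.choice([-step_size, step_size])
        if reward(candidate) > reward(x):
            x = candidate
    return x

x_proxy = hill_climb(proxy_reward)
# The proxy keeps paying out past the target, so the optimizer
# overshoots: true performance peaks at TARGET and then degrades.
```

The optimizer does exactly what it was told, yet lands far from what was wanted; the gap between `proxy_reward` and `true_objective` is the whole problem, and no amount of better optimization closes it.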

So any given AI is an approximation of the target function, which is itself an approximation of its creator's goals. But the creators are companies and nations, and their goals are often (not always, but the exceptions don't matter if the bad case occurs even occasionally) to grow and to dominate. Companies campaign to change laws in favor of their narrow self-interest over that of the public (cigarettes, pollution, workplace safety), and governments have been known to go to war even with supposed allies.