LLMs have some bizarre facility with Hindley-Milner-based languages; they're basically automatically good at even very new ones like Gleam and nanolang. I have a never-released-anywhere hobby ML that compiles to Lua, and coding models can write it fine. Better than they write Python or PHP, for sure, and those have huge corpora in the training data.
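For concreteness, here's a minimal sketch of what the family looks like, written in OCaml as a stand-in since my hobby ML isn't public (the function and names are purely illustrative): no type annotations anywhere, yet everything is fully statically typed.

```ocaml
(* Hindley-Milner inference: the compiler infers the fully general type
     val map_rev : 'a list -> ('a -> 'b) -> 'b list
   with no annotations, and rejects any call that doesn't fit it
   at compile time. *)
let rec map_rev xs f =
  match xs with
  | [] -> []
  | x :: rest -> map_rev rest f @ [f x]

let () =
  map_rev [1; 2; 3] string_of_int   (* instantiated at int/string here *)
  |> String.concat ", "
  |> print_endline                  (* prints "3, 2, 1" *)
```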
I don't even have a good conjecture about why this is the case, but right now all my assisted coding is in MLs for this reason.