LLMs playing chess isn't a big deal. You can train a model on chess games and it will play at a decent Elo and very rarely make illegal moves (a ~99.8% legal-move rate). There are a few such models around. I think post-training degrades chess ability, and OpenAI et al. just don't care much about preserving it. But LLMs can play chess just fine.
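For what it's worth, the legal-move rate is easy to define and measure. Here's a minimal sketch: everything in it is hypothetical for illustration (the `is_legal` oracle is a stub; in practice you'd use python-chess, e.g. check whether `chess.Move.from_uci(mv)` is in `chess.Board(fen).legal_moves`).

```python
from typing import Callable

def legal_move_rate(predictions, is_legal: Callable[[str, str], bool]) -> float:
    """Fraction of (fen, uci_move) model predictions that are legal.

    `is_legal` is a legality oracle; here it's a stub, but in real
    evaluation you'd back it with a chess library like python-chess.
    """
    if not predictions:
        return 0.0
    ok = sum(1 for fen, mv in predictions if is_legal(fen, mv))
    return ok / len(predictions)

# Toy demo only: a fake oracle that accepts two specific moves.
demo = [("start", "e2e4"), ("start", "e2e5"), ("start", "g1f3")]
rate = legal_move_rate(demo, lambda fen, mv: mv in {"e2e4", "g1f3"})
print(rate)  # 2 of the 3 demo moves pass the toy oracle
```

The ~99.8% figures reported for chess-trained models come from running exactly this kind of count over many sampled games.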
Jeez, that arXiv paper invalidates my assumption that it can't model the game. Great read. Thank you for sharing.
Insane that the model actually does seem to internalize a representation of the state of the board -- rather than just pattern-matching against training data with similar move sequences.
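The standard way papers establish this is a linear probe: train a simple linear classifier to read a board fact (say, whether a given square is occupied) straight out of the model's hidden states. Here's a self-contained toy sketch of the idea, not the paper's actual setup: the "hidden states" are synthetic, with one direction carrying the board info, and the dimensions are made up.

```python
import math
import random

random.seed(0)
DIM = 16  # toy hidden-state width; real models use hundreds of dims

def make_example():
    # Synthetic stand-in for a transformer hidden state h whose label y
    # says whether some fixed square is occupied. One direction (index 3)
    # linearly encodes the answer; the rest is noise.
    y = random.randint(0, 1)
    h = [random.gauss(0, 1) for _ in range(DIM)]
    h[3] += 2.5 if y else -2.5
    return h, y

train_set = [make_example() for _ in range(500)]
held_out = [make_example() for _ in range(200)]

# Linear probe = logistic regression, trained with plain gradient descent.
w = [0.0] * DIM
b = 0.0
lr = 0.1
for _ in range(50):
    for h, y in train_set:
        z = sum(wi * hi for wi, hi in zip(w, h)) + b
        z = max(-30.0, min(30.0, z))  # clamp to avoid overflow in exp
        p = 1 / (1 + math.exp(-z))
        g = p - y  # gradient of the logistic loss w.r.t. z
        w = [wi - lr * g * hi for wi, hi in zip(w, h)]
        b -= lr * g

acc = sum(
    (sum(wi * hi for wi, hi in zip(w, h)) + b > 0) == (y == 1)
    for h, y in held_out
) / len(held_out)
print(f"probe accuracy: {acc:.2f}")
```

If a probe this simple recovers the square's state with high accuracy, the information must be linearly present in the activations, which is the core of the "it internalizes the board" argument.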
...Makes me wish I could get back into a research lab. Been a while since I've stuck to reading a whole paper out of legitimate interest.
(Edit) At the same time, it's worth noting the accuracy errors and the potential for illegal moves. That's still enough to keep LLMs out of problem domains with severe consequences, like banking, security, medicine, and law.