Why do we care about this? Chess AI has long been a solved problem, and LLMs are just an overly brute-forced approach. They will never become very efficient chess players.

The correct solution is to have a conventional chess AI as a tool and use the LLM as a front end for humanized output. A software engineer who proposes doing it all via a raw LLM should be fired.
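For what it's worth, that split is easy to wire up. Here's a minimal sketch assuming python-chess and a local Stockfish binary (adjust the path); `llm_explain` is a hypothetical placeholder for whatever LLM API you'd actually call:

```python
import chess
import chess.engine

def llm_explain(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; swap in your provider."""
    return f"[LLM narration for: {prompt}]"

def play_and_narrate(fen: str, stockfish_path: str = "/usr/bin/stockfish") -> str:
    board = chess.Board(fen)
    engine = chess.engine.SimpleEngine.popen_uci(stockfish_path)
    try:
        # The conventional engine does the actual chess: search, eval, move choice.
        result = engine.play(board, chess.engine.Limit(time=0.5))
    finally:
        engine.quit()
    san = board.san(result.move)  # human-readable notation (taken before pushing)
    # The LLM is only the front end: it turns the engine's move into friendly prose.
    return llm_explain(f"Explain why {san} is a good move in the position {fen}.")

print(play_and_narrate(chess.STARTING_FEN))
```

The engine never depends on the LLM being right about chess, and the LLM never has to be.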

It's a proxy for generalized reasoning.

The point isn't that LLMs are the best AI architecture for chess.

Why? Beating chess is more about searching a vast game tree than about reasoning.

Reasoning would be more like the car wash question.
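To make the "searching" half of that concrete, here's a minimal fixed-depth negamax sketch using python-chess with a toy material-only evaluation (the depth and piece values are arbitrary; a real engine adds alpha-beta pruning, quiescence search, and a far richer evaluation):

```python
import chess

# Toy material values; a real engine's evaluation is far richer than this.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from the side-to-move's perspective."""
    score = 0
    for piece_type, value in PIECE_VALUES.items():
        score += value * len(board.pieces(piece_type, board.turn))
        score -= value * len(board.pieces(piece_type, not board.turn))
    return score

def negamax(board: chess.Board, depth: int) -> int:
    """Brute-force fixed-depth search: pure enumeration, no reasoning."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -10**9
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    best_m, best_s = None, -10**9
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_s:
            best_m, best_s = move, score
    return best_m

# From the start position, material is equal down every line at this depth,
# so ties are broken by move order rather than by any "understanding".
print(best_move(chess.Board()))
```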

It's not entirely clear how LLMs that can play chess do so, but it is clearly very different from how other machines do it. They construct an internal representation of the board, they can estimate a player's skill and adjust accordingly, and, unlike other machines but similarly to humans, they are sensitive to how a given position came about when predicting the next move.

Regardless, there's plenty of reasoning in chess.

It’s very clear how: chess moves and positions are encoded as vectors in their training data, and when they are prompted with a certain board state, they respond with the most probable continuation. There is no reasoning.
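The "most probable response" story can at least be made concrete. Here's a toy bigram model over SAN move tokens; everything in it, the three-line corpus included, is made up for illustration, and whether this kind of frequency lookup is really all an LLM does is exactly what's in dispute upthread:

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus of games as space-separated SAN moves;
# a real LLM trains on billions of tokens, not three lines.
GAMES = [
    "e4 e5 Nf3 Nc6 Bb5 a6",
    "e4 e5 Nf3 Nc6 Bc4 Bc5",
    "e4 c5 Nf3 d6 d4 cxd4",
]

# Count which move follows each single-move context (a bigram model; an LLM
# conditions on far longer contexts, but the "most probable next token"
# mechanism being claimed here is the same in spirit).
following = defaultdict(Counter)
for game in GAMES:
    moves = game.split()
    for prev, nxt in zip(moves, moves[1:]):
        following[prev][nxt] += 1

def most_probable_reply(move: str) -> str:
    """Return the highest-frequency continuation seen in 'training'."""
    return following[move].most_common(1)[0][0]

print(most_probable_reply("e4"))   # 'e5' (seen twice, vs 'c5' once)
print(most_probable_reply("Nf3"))  # 'Nc6' (seen twice, vs 'd6' once)
```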

Actual researchers can't give you a complete answer, but you can? Whatever you say.

> It's a proxy for generalized reasoning.

And so far I am only convinced that they have succeeded in appearing to have generalized reasoning. That is, when an LLM plays chess it is performing Searle’s Chinese room thought experiment while claiming to pass the Turing test.