The title is pretty buzzy given how limited the described task is. In one specific, very constrained, and artificial task, you can find something like detailed balance. And even then, their data are quite far from a perfect fit for detailed balance.

I'd love to be able to use my least-action-principle knowledge for LLM interpretability, but this paper doesn't convince me at all :)

Since it took me a few minutes to find the description of the task, here it is:

We conducted experiments on three different models, including GPT-5 Nano, Claude-4, and Gemini-2.5-flash. Each model was prompted to generate a new word based on a given prompt word such that the sum of the letter indices of the new word equals 100. For example, given the prompt “WIZARDS(23+9+26+1+18+4+19=100)”, the model needs to generate a new word whose letter indices also sum to 100, such as “BUZZY(2+21+26+26+25=100)”.
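
If you want to sanity-check the arithmetic yourself, here's a minimal sketch (mine, not from the paper) that scores a word with A=1 through Z=26:

    def letter_sum(word: str) -> int:
        # Sum of letter indices with A=1 ... Z=26, ignoring non-letters.
        return sum(ord(c) - ord('a') + 1 for c in word.lower() if c.isalpha())

    # Both examples from the quoted prompt check out:
    assert letter_sum("WIZARDS") == 100  # 23+9+26+1+18+4+19
    assert letter_sum("BUZZY") == 100    # 2+21+26+26+25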