So a prediction machine chose a particular predicted path, then generated phrases to rationalize it, and you're swooning? I guarantee the LLM had no ability to "understand what it was doing" at any point.