Yeah, ultimately the LLM is guess_what_could_come_next(document) in a loop, with some I/O that either does something with the latest guess or appends more content to the document from elsewhere.
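A minimal sketch of that loop, for concreteness. Everything here is hypothetical: guess_what_could_come_next is a stub standing in for one sampled model step, and the TOOL:/DONE markers are made up, not any real protocol.

```python
def guess_what_could_come_next(document: str) -> str:
    # Stub standing in for one sampled model step; a real LLM goes here.
    return "TOOL: date\n" if "TOOL:" not in document else "DONE\n"

def run_tool(call: str) -> str:
    # Stub standing in for I/O: act on the latest guess, return content
    # that gets appended to the document from elsewhere.
    return "2024-01-01\n"

def loop(document: str) -> str:
    while True:
        guess = guess_what_could_come_next(document)
        document += guess                 # the guess itself is just more document
        if guess.startswith("TOOL:"):
            document += run_tool(guess)   # append the tool's output from elsewhere
        elif guess.startswith("DONE"):
            return document

print(loop("User: what's the date?\n"))
```

Note there's nothing structural separating the user's text, the model's guesses, and the tool output: it's all one string.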
Any distinctions inside the document live in the land of statistical patterns and weights, rather than hard, auditable logic.