LLMs tend to imitate that practice; Gemini, for example, seems to do it by default in its translations unless you stop it. The result is pretty poor, though - it makes trivial things overly verbose and rarely gets the deeper cultural context. The knowledge is clearly there: if you ask for it explicitly, it does much better. But its generalization ability is still nowhere near the required level, so it struggles to connect the dots on its own.

I was going to say that I'm not certain the knowledge is there, so I tried an experiment: if you give it a random Bible passage in Greek, can it produce decent critical commentary on it? That's a subject it has certainly ingested mounds of literature on, both decent and terrible.

I checked with a few notable passages, like the household codes, and yeah, it does a decent (albeit superficial) job. That's pretty neat.