Sure, even in the days of Markov chains you could generate nonsense in the style of Shakespeare, so it shouldn't be surprising that you could also do the inverse.
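For reference, the classic trick is tiny: train a word-level Markov chain on a corpus and sample from it. A minimal sketch (function names are my own, not from any particular library):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words seen after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random starting state, sampling successors."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(state):]))
        if not successors:  # dead end: no continuation observed in the corpus
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Feed it the complete works of Shakespeare and it produces locally plausible, globally incoherent pastiche, which was the state of the art for "style generation" for decades.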

But an LLM can latch onto a typo you've made only once, reason "that's a typical mistake for an Italian native speaker", and use clues like that. It has a much richer prior for making informed decisions.

I'm not convinced, though I'm no expert either. I think an LLM would use that same typo to "conclude" that the author is A or B or C, depending on what it "feels like proving" at the time.

LLMs are surely excellent at style transfer, but I doubt they can reliably attribute a given style to less well-known authors.
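Worth noting that attribution doesn't even need an LLM: classical stylometry baselines compare character n-gram profiles of the unknown text against each candidate's known writing. A rough sketch (the setup and names are illustrative, not a real benchmark):

```python
import math
from collections import Counter

def ngram_profile(text, n=3):
    """Character n-gram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two frequency Counters."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unknown, candidates, n=3):
    """Pick the candidate author whose profile is closest to the unknown text.

    `candidates` maps author name -> sample of their known writing."""
    profile = ngram_profile(unknown, n)
    return max(candidates,
               key=lambda name: cosine(profile, ngram_profile(candidates[name], n)))
```

Methods like this work surprisingly well for authors with a large known corpus, which is exactly the point of the skepticism above: for less well-known authors, neither the n-gram profile nor the LLM's prior has much to anchor on.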