> I've seen people claiming that they have a bunch of paid accounts that they use to fetch the pages, which is, of course, ridiculous.
The curious part is that they scrape arbitrary pages on demand. So a publisher could submit a lot of requests to archive their own pages and see whether they all come from a single account or a small subset of accounts.
I hope they haven't been stealing cookies from actual users through a botnet or something.
Exactly. If I were the admin of a popular news website, I would archive some articles and look at the access logs in the backend. This can't be too hard to figure out.
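A minimal sketch of that check, assuming a combined-format access log that records an authenticated username (the path, log location, and helper are all hypothetical):

```python
import re

# Hypothetical setup: submit a fresh article URL to archive.today, then
# scan our own access log for whoever fetched it right afterwards.
ARTICLE_PATH = "/2024/05/some-paywalled-article"   # hypothetical path
LOG_FILE = "/var/log/nginx/access.log"             # hypothetical location

# Combined log format with an authenticated user in the third field, e.g.:
# 1.2.3.4 - alice [10/Oct/2024:13:55:36 +0000] "GET /2024/05/... HTTP/1.1" 200 ...
LOG_LINE = re.compile(r'^(\S+) \S+ (\S+) \[([^\]]+)\] "GET (\S+)')

def hits_for(path: str, log_file: str):
    """Yield (timestamp, remote_addr, user) for requests to the article."""
    with open(log_file) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if m and m.group(4).startswith(path):
                yield m.group(3), m.group(1), m.group(2)

# An account that shows up for every freshly archived article is the leak.
for ts, addr, user in hits_for(ARTICLE_PATH, LOG_FILE):
    print(ts, addr, user)
```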
You don't even need active measures. If a publisher is serious about tracing traitors, there are algorithms for that (streaming services use them to trace pirates); it's called "traitor tracing" in the literature. The idea is to embed watermarks following a per-recipient pattern that points to a traitor, or even to a coalition of traitors acting in concert.
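A toy version of the single-traitor case (all names hypothetical; real schemes use collusion-resistant codes such as Tardos codes, so that even a copy mixed by a coalition still implicates its members):

```python
import hashlib

CODE_LEN = 32

def codeword(subscriber_id: str) -> list[int]:
    # Derive a stable per-subscriber bit string (toy scheme, not Tardos).
    digest = hashlib.sha256(subscriber_id.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(CODE_LEN)]

def embed(positions: list[tuple[str, str]], subscriber_id: str) -> list[str]:
    # positions[i] = (variant_a, variant_b): two imperceptibly different
    # versions of the same spot; the subscriber's bit picks one per slot.
    bits = codeword(subscriber_id)
    return [pair[b] for pair, b in zip(positions, bits)]

def trace(leaked: list[str], positions, subscribers) -> str | None:
    # Read the embedded bits back out of the leaked copy, match a subscriber.
    bits = [pair.index(slot) for pair, slot in zip(positions, leaked)]
    for s in subscribers:
        if codeword(s)[:len(bits)] == bits:
            return s
    return None
```

This only catches a single leaker republishing their copy verbatim; the collusion-resistant codes are what make mixing two copies useless.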
It would be challenging to do with text, but is certainly doable with images - and articles contain those.
You need that sort of thing (i.e. watermarking) when people are intentionally trying to hide who did it.
In the archive.today case, it looks pretty automated. Surely just adding an HTML comment would be sufficient.
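For that non-adversarial case, a sketch (the marker format and secret are made up):

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical key, never sent to clients

def marker(account_id: str) -> str:
    # Keyed tag, so a reader can't forge a marker pointing at someone else.
    return hmac.new(SECRET, account_id.encode(), hashlib.sha256).hexdigest()[:16]

def stamp(html: str, account_id: str) -> str:
    # Stamp each served page with a per-account comment before </body>.
    return html.replace("</body>", f"<!-- v:{marker(account_id)} --></body>", 1)

def identify(archived_html: str, accounts: list[str]) -> str | None:
    # Brute-force over known accounts; cheap at subscriber-list scale.
    for account_id in accounts:
        if f"<!-- v:{marker(account_id)} -->" in archived_html:
            return account_id
    return None
```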
If they use paid accounts, I would expect them to strip identifying info automatically. An "obvious" way to do that is to diff the output from two separate accounts on separate hardware connecting from separate regions. Streaming services commonly employ per-session randomized steganographic watermarks to thwart exactly that tactic, so we should expect major publishers to do the same.
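The diff-and-strip tactic is easy to sketch, and the sketch also shows why it only beats discrete markers like the HTML comment above: a watermark spread through image pixels differs everywhere, so there is no clean "common" part to keep, and merging two copies still yields a coalition-traceable signal.

```python
import difflib

def strip_account_markers(copy_a: str, copy_b: str) -> str:
    # Fetch the same article with two unrelated accounts and keep only the
    # lines the copies agree on, dropping naive per-account markers.
    lines_a = copy_a.splitlines()
    lines_b = copy_b.splitlines()
    matcher = difflib.SequenceMatcher(a=lines_a, b=lines_b, autojunk=False)
    common = []
    for block in matcher.get_matching_blocks():
        common.extend(lines_a[block.a : block.a + block.size])
    return "\n".join(common)
```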
At which point we still lack a satisfactory answer to the question: just how is archive.today reliably bypassing paywalls on short notice? If it's via paid accounts, you'd expect them to burn accounts at an unsustainable rate.