You don't even need to file requests if you own the URL. robots.txt changes are applied retroactively, which means you can disallow crawling of /abc, request a re-crawl, and all past snapshots matching the new rule will be removed. For instance, a minimal robots.txt along these lines (using the /abc placeholder path from above) would, under that retroactive policy, hide every archived snapshot under that path:
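    User-agent: *
    Disallow: /abc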
Trying to search the Wayback Machine almost always gives me their made-up 498 error, and when I do get a result, the interface for scrolling through dates is janky at best.
Archive.today has just about everything the archived site doesn't want archived. Archive.org doesn't, because it lets sites delete archives.
The Wayback Machine removes archives upon request, so there's definitely stuff they don't make publicly available (though they may still retain it).
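If you want to check what's still publicly exposed for a given URL, the Wayback Machine's CDX API lets you list snapshots directly. A rough Python sketch, using only the standard library (the endpoint and its url/output/limit parameters are the documented CDX interface; example.com is just a placeholder):

    import json
    import urllib.parse
    import urllib.request

    def list_snapshots(url, limit=10):
        # Query the Wayback Machine CDX API for snapshots of `url`.
        query = urllib.parse.urlencode({
            "url": url,
            "output": "json",
            "limit": limit,
        })
        endpoint = f"https://web.archive.org/cdx/search/cdx?{query}"
        with urllib.request.urlopen(endpoint) as resp:
            rows = json.load(resp)
        if not rows:
            return []  # no publicly visible snapshots
        # First row is the column header (urlkey, timestamp, original, ...);
        # each remaining row describes one snapshot.
        header, entries = rows[0], rows[1:]
        return [dict(zip(header, entry)) for entry in entries]

    if __name__ == "__main__":
        for snap in list_snapshots("example.com"):
            print(snap["timestamp"], snap["original"])

An empty result here doesn't prove nothing was crawled, only that nothing is currently served publicly, which is exactly the distinction above.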
Accounts to bypass paywalls? Would they really have the audacity to do that?
Oh yeah, those were a thing. As a public organization, they can't really do that.
I personally just don't use websites that paywall important information.