No idea how they actually do it, but I wouldn't be surprised if manual reports and actions play a big role. The policy doesn't need to be enforced reliably, as long as it's plausible that any reasonably big actor will get caught sooner or later and the consequences of getting caught are business-ruining.

But detecting it on a technical level shouldn't be hard either. Visit the page, take a screenshot, have an AI identify the dismiss button on any cookie/newsletter popup, scroll a bit, click something that looks inert, check whether the URL or history changes, then trigger the back action. Once a suspicious site is identified, put it in a queue for manual review.
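A minimal sketch of that detection loop, assuming Puppeteer and a crude selector guess in place of the AI vision step (the selectors and the suspicion heuristic are made up, not anything Google is known to use):

    const puppeteer = require('puppeteer');

    async function looksLikeBackTrap(url) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto(url, { waitUntil: 'networkidle2' });

      const entriesBefore = await page.evaluate(() => history.length);

      // Stand-in for "have an AI identify the dismiss button":
      // just guess at a consent/close control.
      const dismiss = await page.$('[aria-label*="close" i], button');
      if (dismiss) await dismiss.click().catch(() => {});

      // Scroll a bit and click somewhere that looks inert.
      await page.evaluate(() => window.scrollBy(0, 500));
      await page.mouse.click(10, 10);

      const entriesAfter = await page.evaluate(() => history.length);

      // Trigger back and see whether we actually leave the page.
      const urlBefore = page.url();
      await page.goBack().catch(() => {});
      const escaped = page.url() !== urlBefore;

      await browser.close();
      // New history entries appearing after trivial clicks, plus a back
      // press that goes nowhere, is the suspicious combination.
      return entriesAfter > entriesBefore && !escaped;
    }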

The URL doesn't even need to change: you can pushState with just a JavaScript state object, catch the popstate event, and do something like display a modal. (I use this pattern to let the back button close fullscreen filter overlays the user opened.)
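For reference, a minimal sketch of the legitimate version of that pattern, assuming an overlay element with class "filter-overlay" (the class name is made up):

    const overlay = document.querySelector('.filter-overlay');

    function openFilterOverlay() {
      overlay.hidden = false;
      // Same URL, just a state object; the address bar never changes.
      history.pushState({ overlay: true }, '', location.href);
    }

    // Pressing back pops the pushed entry; close the overlay
    // instead of leaving the page.
    window.addEventListener('popstate', () => {
      overlay.hidden = true;
    });

    // An in-page close button goes through history.back() so the
    // history stack stays consistent with what's on screen.
    function closeFilterOverlay() {
      history.back();
    }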

Still, it requires a user interaction first, on any element, once: Chrome skips history entries that were added without a user gesture when handling the back button. So the crawler needs to identify and click something, most likely the consent/reject button, which may not even be shown to Googlebot.
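Which is why the abusive version has to arm itself on the first gesture. Roughly, as a sketch (the ".fake-back-modal" element is a made-up name, not taken from any real site):

    // Arm the trap on the first click anywhere, so the pushed entry
    // counts as user-activated and the back button won't skip it.
    document.addEventListener('click', () => {
      history.pushState({ trap: true }, '', location.href);
    }, { once: true });

    window.addEventListener('popstate', () => {
      // Back was pressed: show an interstitial instead of leaving.
      document.querySelector('.fake-back-modal').hidden = false;
    });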

So they'll likely rely on reports, or maybe even on Chrome field data.

Field data is a great point - in aggregate it should be really obvious when people click "back" and then immediately click back again (or close the tab, or do whatever else people do to "escape").
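Chrome presumably measures this browser-side, but the same "escape" heuristic sketched as page-level RUM telemetry would look something like this (the /rum endpoint and the 1.5s window are made-up assumptions):

    // Flag sessions where a back navigation is followed almost
    // immediately by another back or by leaving the page entirely.
    const WINDOW_MS = 1500;
    let lastPop = 0;

    window.addEventListener('popstate', () => {
      const now = performance.now();
      if (now - lastPop < WINDOW_MS) {
        navigator.sendBeacon('/rum', JSON.stringify({ signal: 'double-back' }));
      }
      lastPop = now;
    });

    window.addEventListener('pagehide', () => {
      if (lastPop && performance.now() - lastPop < WINDOW_MS) {
        navigator.sendBeacon('/rum', JSON.stringify({ signal: 'back-then-leave' }));
      }
    });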