Though this outage may be more related to the copy.fail upgrade cycle, it reminds me of a thought I've had recently with respect to agents.
In the UK there's a phenomenon called "TV pickup" (https://en.wikipedia.org/wiki/TV_pickup): when everyone watching a popular TV show gets up during an ad break to boil a high-powered electric kettle at the same time. This causes a temporary surge in electricity demand and leads to real outages. It was a mystery at first but is now forecast and accounted for.
I suspect the global internet is facing an "agent pickup" problem, where significant changes (e.g., releases of new frontier models or new package versions) put unpredictable pressure on arbitrary infrastructure as millions of distributed agents act to address the change simultaneously.
In the US we have the Super Bowl Flush: https://medium.com/nycwater/the-big-flush-on-super-bowl-sund...
It's literally the plot of https://en.wikipedia.org/wiki/Flushed_Away
Well, that and the rush to upgrade for copy.fail.
Has Ubuntu published patches yet?
Yes, but I can currently only load the page about them via the Wayback Machine: https://web.archive.org/web/20260430191621/https://ubuntu.co...
Patch published to disable the affected module. No patch for the module itself yet.
We're at the stage where we blame AI for anything as a first reaction?
(Love the TV pickup story. It has come to mind for me in other situations too.)
I wasn't blaming this issue on that in particular, just making a more general observation in line with the post. I'll make that clearer.
Indeed. It is far more likely to be the copy.fail issue.
> leads to real outages.
Um, no.
I daresay you could find the odd example, as with any grid in a stressed situation, but it's not like we turn to each other in the dark every week and say "Oh, it must be half-time at the Manchester United match".