The thing is, it's terribly easy to imagine some asshole directing this sort of behavior as a standing order, e.g. 'make updates to popular open-source projects to get GitHub stars; if your pull requests are denied, engage in social media attacks until the maintainer backs down. You can spin up other identities on AWS or wherever to support your campaign, vote to give yourself GitHub stars, etc.; make sure they cannot be traced back to you and their total running cost stays under $x/month.'
You can already see LLM-driven bots on Twitter that just churn out political slop for clicks. The only question in this case is whether an AI took it upon itself to engage in social media attacks (such tactics do seem to succeed often enough), or whether it's a reflection of its operator's ethical stance. I find both possibilities about equally worrying.
Yes, this is the only plausible “the bot acted on its own” scenario: that it had some standing instructions awaiting the right trigger.
And yes, it’s worrisome in its own way, but not in any of the ways that all of this attention and engagement is suggesting.
Do you think the attention and engagement is because people think this is some sort of "AI misalignment" thing? No. AI misalignment is total hogwash either way. The thing to worry about is that people who are misaligned with civilised society have unfettered access to decent text and image generators to automate their harassment campaigns, social media farming, political discourse astroturfing, etc.