Interesting that the fake personas are active on both reddit and 4chan, which are usually anonymous or at least pseudonymous.

The claim that "bots are filling subreddits/image boards" has long been a common conspiracy theory, usually called the "dead Internet theory". Apparently it is at least partially true.

I at least vaguely get the idea of having law enforcement one-on-one message people who might be planning crimes. I think it should be handled with a lot of care and documentation (there's a lot of risk of entrapment with this sort of thing, right?), but it's at least possible that it could be done in a manner that is not a net negative.

Having a bot to help radicalize people on a public, open site like Reddit seems pretty bad, though. Isn’t it more likely to produce an environment of radicalization?

I think that is why it was such a contentious topic. Bots inciting people on public forums is obviously very different from sting operations in small chat groups.

I can't conceive of what the point would be, if not radicalizing the population.

It's pretty common in fosscad (the 3D-printed firearms subreddit) that people get DMs from brand-new accounts asking them to do illegal things. Cops/feds really are doing the least possible to entrap people.

[deleted]

I mean, of course it is. Marketing companies long ago realized that they can advertise far more effectively by acting like humans, and that people take a recommendation from another person more seriously than a random ad. Propagandists have had the same realization.

If you consider how fast you can generate huge amounts of random comments, it's basically a no-brainer that a large share of online comments are machine-generated.
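A toy back-of-envelope illustrates the asymmetry. Even the most naive approach, filling in string templates with no language model at all (the templates and word lists below are made up for illustration), produces comments far faster than any human could read them:

```python
import random
import time

# Hypothetical templates and fillers, purely to show the mechanics.
TEMPLATES = [
    "Totally agree with {user}, {topic} is {adj}.",
    "As someone who works in {topic}, this is {adj}.",
    "{adj} take. {topic} changed my mind about this.",
]
USERS = ["a friend", "the OP", "everyone here"]
TOPICS = ["crypto", "politics", "AI", "housing"]
ADJS = ["overrated", "underrated", "obvious", "scary"]

def fake_comment(rng: random.Random) -> str:
    # Pick a template and fill each slot with a random word.
    return rng.choice(TEMPLATES).format(
        user=rng.choice(USERS),
        topic=rng.choice(TOPICS),
        adj=rng.choice(ADJS),
    )

rng = random.Random(0)
start = time.perf_counter()
comments = [fake_comment(rng) for _ in range(100_000)]
elapsed = time.perf_counter() - start
print(f"{len(comments)} comments in {elapsed:.2f}s")
```

On any modern machine this generates comments orders of magnitude faster than people write them; swap in an LLM API call and the text gets convincing while the throughput stays bounded only by money and rate limits.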

The only real throttle is the social media platform itself, and how well it protects against fake accounts. I don't know how motivated Reddit really is to stop them (engagement is engagement), and a quick check on Github shows that there are a bunch of readily available solvers for 4chan's captchas.