I wonder how much of this will just encourage protests and radicalization. If your agent is trained to match a profile of a radical, then it necessarily is spreading and encouraging that radical messaging in order to fit in and gain trust. At least with real agents there is a plausible mechanism for their judgement to filter out who is targeted and they can't infinitely propagate like the AI could.
It's already somewhat normal for cops to try and radicalize people to create evidence for arrests so it's only a question of scaling up, right?
That's why I worded it the way I did. At least theoretically the humans have some judgement that could limit who they go after or how far they push, and they can't propagate infinitely. The issue with AI is the even greater lack of accountability and the potential for its messaging to more easily hit a critical mass. So far, the human version tends to focus on very small groups or subgroups. The scaling seems like a much bigger threat with different possible societal effects.
>potential for its messaging to more easily hit a critical mass.
Like that time we funded a minor regional insurgency that went on to a) kick out the Russians b) run their own country c) attack us d) kick us out e) run their own country.
The feds losing control of their assets has been a meme ever since Kennedy ate a bullet.
>>The feds losing control of their assets has been a meme ever since Kennedy ate a bullet.
Hey, that's not fair. The problem of governments running authoritarian operations that have wider reaching consequences as they spiral out of State control is MUCH older than that...
>wider reaching consequences as they spiral out of State control is MUCH older than that.
Alea iacta est
You saying Caesar was a spook?
It was just an example of the state losing control of an instrument they had built up to increase their own power.
Yeah, that's fair. Without oversight and just doing it on Reddit for example you could get broad radical bases.
On the other hand, more people to arrest or crack down on? And then they can't vote, so that's parsimonious for some actors.
>“Supervisor [Cavanaugh] ultimately voted for the agreement because Massive Blue is alleged to be in pursuit of human trafficking, a noble goal,” a representative from Cavanaugh’s office told 404 Media in an email. “A major concern regarding the use of the application, is that the government should not be monitoring each and every citizen. To his knowledge, no arrests have been made to date as a result of the use of the application. If Overwatch is used to bring about arrests of human traffickers, then the program should continue. However, if it is just being used to collect surveillance on law-abiding citizens and is not leading to any arrests, then the program needs to be discontinued.”
It's a direct ultimatum: radicalize/encourage the targets, or lose the contract.
At least the government spokesperson is focusing on the actual crime, and not the "we'll report on political opponents" service.
So the thing about "crime" is that everyone is doing it daily, and just not knowing/being caught. The whole "we'll report on political opponents" is absolutely the value of the software, it's just not reporting anything they can use in court, so it's just boring spying.
The cynical take is that the whole "law-abiding citizens" differentiation is just a way to say "I want this focused on out-group, not targeting in-group bad guys we're not interested in prosecuting". Think about who runs around saying "true Americans" or "patriots" to mean "my coalition" as opposed to a more objective definition.
> At least theoretically the humans have some judgement that could limit who they go after or how far they push and can't propagate infinitely.
While the scale is certainly limited, the judgement is not. Cops have been known to use convicted sex criminals, and even medically diagnosed psychopaths, to entrap, act as provocateurs, or serve as foreign agents. There is a famous case in Iceland where the FBI used Siggi Hakkari, a convicted sex felon and known psychopath, as a foreign agent to spy on the Icelandic Parliament. (https://en.wikipedia.org/wiki/Sigurdur_Thordarson)
With cops we can safely assume a complete lack of morality and judgement.
They use "bad people" because the rest of the system is predisposed to ignore them when they say "the cops made me do X and I have the text messages to prove it".
Compromised individuals become assets because they are the easiest to coerce. No one volunteers to do this stuff out of the goodness of their heart.
There's a sci-fi story in there somewhere, about a government overthrown by a revolution orchestrated by AI designed to go undercover
If there isn’t, an AI could write it: https://github.com/lechmazur/writing?tab=readme-ov-file
Why should I, as a reader, spend the time to read a meaningless story that no person could be arsed to write, when I could spend a lifetime reading and never get through only the best works humanity has actually created?
There is no shortage of fiction that we need language models to address.
Honestly I feel the same about all model outputs that are passed off as art.
If it's not worth the time for the creator to make, why is it worth my time as an audience member to consider it?
There is a whole world of real artists dying for audiences. I'll pay them in attention and money. Not bots. There's no connection to be had with a bot or its output
I'm asking out of curiosity and as I explore my own thoughts on this: is there any level of AI involvement in art creation that you think is acceptable?
For instance, I write short stories (i.e., about 8-30k words). I've done it before AI, and outside some short stories I've had published I mostly just do it for my own sense of creative expression. Some of my friends and family read my stuff, but I am not writing for an audience, at least I haven't yet.
One thing I've experimented with recently is using AI as an editor, something I've never had because I'm not a professional, and I do not want to burden my friends and family with requests for feedback on unfinished works. I create the ideas (every short story I've ever written has at least 5k words in a "story bible"), I write the words on the page. In my last two stories, I've tested using AI to give me feedback on consistency of tone, word repetition, unidentifiable motivations, etc.
While the feedback I get often suggests or observes things that are done intentionally, it also has provided some really useful observations and guidance and has incrementally made my subsequent writing better. Thus far I don't feel like I've lost any of the authorship of the product, but I also know that for some any AI used spoils the pot.
For the above, I did spend time (a lot of it) to make it, but does any use of AI in any capacity render it not worth your time to read it? I am asking sincerely!
AI giving feedback on your story is a few steps up from using spell check or grammarly. Harmless enough, right?
But consider this: any feedback the AI gives is 1) not intelligent, merely guessing the next word based on similar requests for feedback it has seen, and 2) always pushing your story toward a more generic version.
Not to mention, there are thousands of excellent human editors out there whose livelihoods are under threat from AI editors, just as writers are from AI writers and coders from AI coders.
I experimented with AI writing tools when they first came out. I was excited by what LLMs could do. Fiction is one place they excel, because hallucinations don't matter. Eventually I came around to the viewpoint that no matter how cool or useful they are, they're not a good thing.
AI is already destroying the publishing industry, and making it very difficult for human writers to get noticed in a sea of robot submissions. There are lots of people out there who won't want to read something if AI was used, for that reason
This is fair, and I agree with all of this. I do think the idea that it pushes toward a more generic story is only true on the assumption that all feedback received is taken, rather than just used as something to think about. Since I'm not and have no intention of being commercial, the point about human editors or the industry is neither here nor there for me.
I think where I end up on this is that in my limited use, nothing in the end product has felt like anything other than mine, because I know how the ideas and words got to the page. BUT: I can't trust that's true for others who use AI, which is why I personally hope (perhaps hypocritically) that AI-assisted work will be clearly labeled, and it's not something I want to read. I think a light touch is helpful; anything more is compromising.
Anyway, this all helped me think through things a bit. I think I will continue to use AI as a feedback mechanism, but only after I have "finished" my work, so I can consider where I might have done it differently as a source of potential learning for the next project.
> any use of AI in any capacity render it not worth your time to read it?
Not the person you replied to, but for me yes absolutely. If it wasn't worth your time to create then it's not worth my time to consume
People like you are starting to describe their work as "ai assisted" but I don't agree with this. It is "ai generated, human assisted". Why would you bother making something where you're only the assistant to the machine? It's kind of pathetic to be proud of this imo
Personally if I could change a setting in my brain that immediately flagged any piece of work with any amount of AI generation in it, I would spend the rest of my life happily avoiding them. There is too much work created by genuine human artists to bother with the soulless AI slop
> If it wasn't worth your time to create then it's not worth my time to consume... It is "ai generated, human assisted"
I truly do understand this, but this line you and others keep using isn't actually helping me understand your position. I spent three months writing my last 7k short story, it was the culmination of an idea that took me a long time to work through. Not a single line was written or suggested by AI (an overt part of the prompt), nothing in the "story bible" was informed by AI in any way. AI helped observe grammar and structure and uncertainty as feedback for me, but it didn't create anything. Nothing was copy-pasted (literally or essentially).
I don't really put labels on my writing because it is, again, just for me (when others read it, it's because they ask to, and yes, I've made clear how AI aided in editing the last two). Nevertheless, I just don't see it this way at all given what I wrote above. The fun part is the execution of the idea; I've no interest in robbing myself of that.
But I can also appreciate that given the private nature of what I'm doing, the stakes are lower so believing what I say is easy. If someone asked me to read a self-described "AI-assisted" story, I probably wouldn't want to because I wouldn't know how much to believe them if they said what I said above.
https://youtu.be/fDyr1JMNHVk?si=GCWlCVeNAUXPZfZJ
There you may find the intro of my favourite X-Files episode. Take a guess what the voice is.
I don't know, the time to be out in the streets throwing rocks was ~25 years ago. It might be too late now (mass surveillance and a wannabe-fascist government).
The only time it's too late to resist totalitarianism in its various forms is when you're dead.
"More friends, bigger rocks, help defeat our enemies" is basically the basis of all of human civilization, so look to the wisdom of your ancestors.
don't bring a cell phone, wear a mask, wear different shoes to change your gait, STFU -- none of these are hard.
"Fighting fascism is a full time job!"
— Lieutenant Shaxs
Hello, land of the resigned.
How long until people use this to radicalize the cops? Thin Blue Line: Even Thinner Edition
Imagine people using bots to make interdepartmental conflicts that turn violent. The guy in precinct 32 is sleeping with my wife, I'm sure of it, I've seen the proof online. That kind of stuff.
It's difficult to imagine there is any room remaining on the margins for American cops to be any more extreme than they already are.
The goal would be to fracture their organizations so that they're attacking each other instead of the protesters.
As someone who has worked in proximity to law enforcement a fair amount, this would likely have some effect on outlier officers who are not part of the 'cop cult'. The problem is that the worst actors, the ones you really want to get rid of, are going to be more resistant to said tactics when they start pitting good friends against each other.
A rock splits easily along fracture lines; randomly launching attacks against the organization without good information will likely have the opposite effect. "Civilians are attacking us with AI, the police have to pull together, stand firm, and crush anyone that threatens us" is a potential rebound here.
Yes this is obviously a tactic that only works if the organization that it is used against is not aware that it is being used.
Law enforcement is not unique in this regard.
You want to astroturf the sheriffs' facebook feed with memes like "Woke Local Cops REFUSE to Taser MS-13 CEO"?
You say that as if that wasn’t the goal.