That could do more harm than good.
Like how California's cancer-warning law (Prop 65) is useless, because it makes it look like everything is known to the state of California to cause cancer, which in turn makes people ignore and tune out the warnings because they aren't actually delivering any signal. This in turn harms people when they think, "How bad can tobacco be? Even my aloe vera plant has a warning label".
Keep it to generated news articles, and people might pay more attention to them.
Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad (somehow this is contaminated, lol), then it'll become a useless warning.
> That could do more harm than good.
The downside to having labels on AI-written political comments, stellar reviews of bad products, speeches by a politician, or supposed photos of wonderful holiday destinations in ads targeted at old people is what, exactly?
Are you really arguing that putting a label on AI-generated content could somehow do more harm than just leaving it (approximately) indistinguishable from the real thing?
I'm not arguing that we need to label anything that used gen AI in any capacity, but past the point of e.g. minor edits, yeah, it should be labeled.
None of those AI-written political comments will have the label added, because it's unprovable, and those propaganda shops are based well outside the necessary jurisdiction anyway. It will just be a burden on legitimate actors and a way for the government to harass legitimate media outlets it doesn't like with expensive "AI usage investigations."
I bought a piece of wooden furniture some time ago. It came with a label saying that the state of California knows it to be a carcinogen. I live in Belgium. It was weird.
Just an observation, but this California meme seems to be the go-to talking point for the anti-AI-regulation crowd lately.
It's not even a good argument. Studies have demonstrated that the law reduces toxic chemicals in the body, and also deters companies from using toxic chemicals in their products.
> Like how California's cancer-warning law (Prop 65) is useless
Californians have measurably lower concentrations of toxic chemicals than non-Californians, so very useless!
> Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad
People have been writing articles without the help of an LLM for decades.
You don't need an LLM for grammar and spell checking; arguably an LLM is less efficient and currently worse at it anyway.
The biggest help an LLM can provide is with research, but that's only because search engines have been artificially enshittified these days. And even here the usefulness is very limited because of hallucinations, so you might be better off without it.
There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high-quality content.
So no, don't believe the hype. There will still be enough journalists not using LLMs at all.
Imagine selling a product with the tagline: "Unlike Pepsi, ours doesn't cause cancer."
It is worse, even less than useless. In the California case, there is very little to gain by lying and not putting a sticker on items that should have one. With AI-generated content, as the models get to the point where we can't tell anymore whether something is fake, there are plenty of reasons to pass off a fake as real, and conditioning people to expect an AI warning will make them more likely to fall for content that ignores this law and doesn't label itself.
What does that mean though? Photos taken using mobile camera apps are processed using AI. Many Photoshop tools now use AI.
Obviously it should not apply to anything using machine-learning-based algorithms in any way, just to content made using generative AI, with exceptions for minor applications and/or a separate label for smaller edits.
How do we know what’s AI-generated vs. sloppy human work? Of course in some situations it is obvious (e.g., video), but text? Audio?
And of course you can even ask AI to add some "human sloppiness" as part of the prompt (spelling mistakes, run-on sentences, or whatever).
Publishing is more than just authoring. You have research, drafts, edits, source verification, voice, formatting, multiple edits for different platforms and mediums. Each one of those steps could be done by AI. It's not a single-shot process.
Where do we draw the line between AI-generated and AI-assisted (aka Photoshop and other tools)?
> Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.
Does Photoshop fall under this category?
Spell check, autocomplete, grammar editing, A/B tests for bylines and photo use, related stories, "viewers also read", tag generation.
I guess you have to disclose every single item on your news site that does anything like this. Any byte that touches a stochastic process is tainted forever.
None of those things are "AI" (LLMs). We had those things before, we'll have them after.
Fully agreed.
Please no. I don’t want that kind of future. It’s going to be California cancer warnings all over again.
I don’t like AI slop, but this kind of legislation does nothing. Look at the low-quality garbage that already exists; do we really need another step in the flow to catch whether it’s AI?
You legislate these problems away.
Ideally, we would just ban AI content altogether.
I don't think there's any way for that to happen, and IF we could create a solid legislative framework, AI could definitely (at some point in the future) contribute more good than bad to society.