This is inverted. AI books should come with warning labels similar to those found on cigarettes.

> AI books should come with warning labels

I disagree. AI use is diffuse; an author is specific. Having authors label their work as AI-free creates accountability in a way that requiring AI-generated work to be labeled does not.

> similar to those found on cigarettes

Hyperbole undermines your argument. We have decades of rigorous, international evidence of the harms of cigarettes. We don't have that for AI.

Saying "I think X should have a warning, like cigarettes do" is not the same as claiming "X is harmful in a way comparable to cigarettes." The similarity is that cigarettes carry warning labels, not that AI is harmful on the order of cigarettes.

> similarity is that cigarettes have warning labels and not that AI is harmful on the order of cigarettes

We put warnings on cigarettes not only because they are harmful, but because we have ample evidence of how and by what mechanism they cause harm.

The logic that leads to labeling every harm is the same logic that causes everything in California to be labeled a cancer hazard. You want tight causation, of sufficient magnitude, and presented in a way that will produce actual behavior change, either through reduced demand or changed producer behavior. Until you have that, as we did for cigarettes, labeling is a bit silly.

That's fine, but it doesn't give you license to put words in other people's mouths. Maybe they want AI labeled for transparency. Maybe it's a matter of personal preference. Or maybe they're following the precautionary principle and don't want to wait for evidence of harm, but rather for evidence of its absence.

There's an infinite number of positions they could hold, and the discussion works better if you ask rather than assume.