That doesn't seem like a good counterargument to me. By that logic, no online service should permit users to upload photos, because someone might use it to share CSAM at some point. Rather than nerfing the tools, implement a sensible detection and reporting pipeline.
>That doesn't seem like a good counterargument to me.
It does to me, especially since he did not implement a sensible detection or reporting pipeline ahead of launching a CSAM generation tool.
Failing to do X doesn't make Y a good idea. You haven't engaged with the argument I made, choosing instead to repeat a politically charged misrepresentation.
I think it's an OK counterargument. You can't have both "AI should do the user's bidding" and "implement a sensible detection and reporting pipeline."
I mean, that is what Anthropic tried here.
"Meh I'm okay with it" is by definition not a counterargument but rather a nonconstructive dismissal of whatever it is a response to.
You can, in fact, have both. You can have a tool that is fully functional, and separately you can have a strategy for reporting suspected violations and responding to those reports. Reports can be automated, assuming you can tolerate the false positive/negative rate. Particularly for a subscription service such as Claude, there is little reason not to implement this other than sheer greed or laziness.
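To make the separation concrete, here's a minimal sketch of what I mean, with everything hypothetical (the hash list, the `Report` shape, the `scan_upload` helper): the tool keeps working as normal, while a separate step matches uploads against a known-bad hash list and queues a report for review. How aggressive the matching is, and whether a flag blocks anything, is the knob that sets your false positive/negative tolerance.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical set of hashes of known-bad material, e.g. from an industry
# hash-sharing program. Real pipelines typically use perceptual hashes rather
# than exact cryptographic hashes; SHA-256 here is just a stand-in.
KNOWN_BAD_HASHES: set[str] = set()


@dataclass
class Report:
    user_id: str
    content_hash: str
    flagged_at: str
    reason: str


def scan_upload(user_id: str, content: bytes, report_queue: list[Report]) -> bool:
    """Hash the uploaded content and queue a report if it matches a known-bad hash.

    Returns True if the content was flagged. The upload itself is not blocked
    here; blocking vs. report-and-review is a separate policy decision that
    determines how costly false positives are.
    """
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        report_queue.append(Report(
            user_id=user_id,
            content_hash=digest,
            flagged_at=datetime.now(timezone.utc).isoformat(),
            reason="hash match against known-bad list",
        ))
        return True
    return False


if __name__ == "__main__":
    queue: list[Report] = []
    flagged = scan_upload("user-123", b"example upload bytes", queue)
    print(f"flagged={flagged}, pending reports={len(queue)}")
```

None of this requires crippling the tool itself; it runs alongside it.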
In the case of Claude in particular, an unacceptably high false positive or negative rate also poses a serious problem for the way they currently do things. The notable difference is that, in the case of false positives, it currently runs up a bill for the customer rather than the service provider.
...or even afterwards. His response was to put it behind a paywall (= start selling it).
And all the world's payment processors and almost all governments and child rights advocates are still on there.
Stunning :)