This is a great example of why Elon is right. AI should be a tool that does the user's bidding, not a moral agent that nerfs itself to protect some arbitrary line.
This is an argument for open models, where you can run the model with your own system prompt on your own hardware, preventing the provider from arbitrarily injecting system prompts.
This is an argument for open source tooling (like opencode) and open models (like deepseek).
Grok is not an open model, Elon does not get any credit for anything here.
Counterpoint: generated CSAM on his platform.
Additional counterpoint: the "mechahitler" chatbot. For those who have forgotten: https://www.forbes.com/sites/tylerroush/2025/07/09/elon-musk...
That doesn't seem like a good counterargument to me. By that logic no online service should permit users to upload photos because someone might use it to share CSAM at some point. Rather than nerfing the tools implement a sensible detection and reporting pipeline.
>That doesn't seem like a good counterargument to me.
It does to me, especially since he did not implement a sensible detection or reporting pipeline before launching a CSAM generation tool.
Failing to do X doesn't make Y a good idea. You haven't engaged with the argument I made, choosing instead to repeat a politically charged misrepresentation.
I think it's an okay counterargument. You can't have both "AI should do the user's bidding" and "implement a sensible detection and reporting pipeline."
I mean, that is what Anthropic tried here.
"Meh, I'm okay with it" is by definition not a counterargument but rather a nonconstructive dismissal of whatever it is responding to.
You can, in fact, have both. You can have a tool that is fully functional, and separately you can have a strategy for reporting suspected violations and responding to those reports. Reports can be automated, assuming you can tolerate the false positive/negative rate. Particularly in the case of a subscription service such as Claude, there is little reason not to implement this other than sheer greed or laziness.
In the case of Claude in particular, an unacceptably high false positive or negative rate also poses a serious problem for the current way they do things. The notable difference is that in the case of false positives it currently runs up a bill for the customer rather than the service provider.
....or even afterwards. His response was to put it behind a paywall (= start selling it).
And all the world's payment processors and almost all governments and child rights advocates are still on there.
Stunning :)
“Think of the children”
grok, why are there slurs in my code?
If the user explicitly requested that, is it really a problem with the tool at that point?
Yes
I’ll just leave this here: https://www.businessinsider.com/grok-ai-elon-musk-is-more-fi...