Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that’s what you’re asking for.
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, the last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what has actually happened here is that some people found a way to circumvent some of those guardrails via something like a jailbreak.
For more evidence:
https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Also, X seem to disagree with you and admit that CSAM was being generated:
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This is because of government pressure (see Ofcom link).
I’d say you’re making yourself look foolish, but you seem happy to defend nonces, so I’ll not waste my time.
> Also, X seem to disagree with you and admit that CSAM was being generated
That post doesn't contain such an admission; it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
> That post doesn't contain such an admission, it instead talks about forbidden prompting.
In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."
> which contradicts your claim that there were no guardrails before.
From the linked post:
> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards
Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point, which was:
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
As you quoted.
I really can't decide if you're stupid, if you think I and other readers are stupid, or if you're so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.
Keep your accusations to yourself. Grok was already refusing to generate naked pictures of adults months ago, when I tested it for the first time. Clearly the "additional safeguards" are meant to protect the system against any jailbreaks.
Just to be clear, I'm to ignore:
* Internet Watch Foundation
* The BBC
* The Guardian
* X themselves
* Ofcom
And believe the word of an anonymous internet account who claims to have tried to undress women using Grok for "research."
> First of all, the Guardian is known to be heavily biased against Musk.
Says who? Musk?
That is only "known" to intellectually dishonest ideologues.
> First of all, the Guardian is known to be heavily biased against Musk.
Biased against the man who asked Epstein which day would be best for the "wildest" party.
[flagged]
> First of all, the Guardian is known to be heavily biased against Musk.
Which is good; that is the sane position to take these days.