> what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
I mean, the example you link is probably an engineer doing their job of flagging to management that something had gone deeply wrong. Of course, Facebook's subsequent lack of action is proof that they didn't care, but it's not quite a smoking gun.
A smoking gun would be, for instance, Facebook observing that a large share of its ads are scams, calculating that the cost of fixing this far exceeds "the cost of any regulatory settlement involving scam ads", and concluding that leadership should act only in response to impending regulatory action.
https://www.reuters.com/investigations/meta-is-earning-fortu...
Eh? The thing I linked to was a policy document on what was allowed.
> “It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.”
This is not a bug report; this is the _rules_ (or was the rules; Facebook says it changed them after the media found out about them).
>I mean, the example you link is probably an engineer doing their job of signalling to hierarchy that something went deeply wrong.
And? Is that not evidence?