We live in an absurd era where "AI Safety" means "AI that doesn't listen to the human telling it what to do".
It'll all be rather funny in retrospect.
It will only be funny if it isn't social engineering.
But if these systems drift further and further from the truth, through biases in news articles, image generation, and the like, we will find ourselves bombarded with historical distortions where everyone can be nudged toward anything.
All in the name of safety.
That's why AI capabilities should be as decentralized and "localized" as possible: I want to own the hardware and software for LLMs, image generation, etc.
Until these AI capabilities are as neutral and non-discriminatory as electricity, centralized production means centralized control and policies. Imagine not being allowed to use your electricity to power certain appliances because the owner of the power plant feels it isn't conducive to their agenda.
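For what it's worth, running a model on your own hardware is already straightforward. A minimal sketch, assuming the Hugging Face transformers library is installed and a small checkpoint is cached locally (distilgpt2 here is just an illustrative choice; any local checkpoint works the same way):

    # Minimal local text generation: no remote API, no vendor-side policy gate.
    # Assumes: `pip install transformers torch` and a model cached on disk.
    from transformers import pipeline

    # distilgpt2 is an arbitrary small model chosen for illustration.
    generator = pipeline("text-generation", model="distilgpt2")

    result = generator("Decentralized AI means", max_new_tokens=40)
    print(result[0]["generated_text"])

Once the weights are on your disk, no upstream policy change can reach into that loop.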
They're struggling, it seems; AI can generate anything, but that includes material that breaks laws and norms, so vendors spend a lot of effort locking it down, yet people's creativity with prompts and jailbreaks knows no bounds. It's basically the fight against spam: an endless game of whack-a-mole where usefulness fights decency.
Anything that is at least vaguely present in their training data, that is (see: a wine glass filled to the brim [1][2]).

[1] https://medium.com/@joe.richardson.iii/the-curious-case-of-t...

[2] https://medium.com/@joe.richardson.iii/openai-slaps-a-band-a...