> And we're introducing Cameo, giving you the power to step into any world or scene, and letting your friends cast you in theirs.
How well will they (and providers of similar tools) be able to keep anyone from putting anyone else in a video, shown doing and saying whatever the tool user wants?
Will some only protect politicians and celebrities? Will the less-famous/less-powerful of us be harassed, defamed, exploited, scammed, etc.?
It seems like this is basically YouTube's ContentID, but for your face. As long as you upload your "cameo," a.k.a. facial scan, to them, they can recognize and control the generation of videos with it. If you don't give them your face, then they can't/won't.
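A minimal sketch of that ContentID-for-faces idea, assuming a hypothetical `CameoRegistry` and toy embedding vectors (everything here is made up for illustration; a real system would derive embeddings from a face-recognition model):

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class CameoRegistry:
    """Hypothetical registry of consenting users' face embeddings."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.embeddings = {}  # user_id -> registered face embedding

    def register(self, user_id, embedding):
        self.embeddings[user_id] = embedding

    def match(self, embedding):
        # Return the registered user whose stored embedding is close
        # enough to this one, or None if the face was never uploaded.
        for user_id, registered in self.embeddings.items():
            if cosine_similarity(embedding, registered) >= self.threshold:
                return user_id
        return None  # unregistered face: nothing to gate on

registry = CameoRegistry()
registry.register("alice", [0.9, 0.1, 0.3])
print(registry.match([0.91, 0.12, 0.28]))  # near Alice's scan -> "alice"
print(registry.match([0.1, 0.9, 0.2]))     # unknown face -> None
```

The same threshold check also illustrates the problem other commenters raise: two near-identical people can produce embeddings closer together than the match threshold.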
"Consent-based likeness. Our goal is to place you in control of your likeness end-to-end with Sora. We have guardrails intended to ensure that your audio and image likeness are used with your consent, via cameos. Only you decide who can use your cameo, and you can revoke access at any time. We also take measures to block depictions of public figures (except those using the cameos feature, of course). Videos that include your cameo—including drafts created by other users—are always visible to you. This lets you easily review and delete (and, if needed, report) any videos featuring your cameo. We also apply extra safety guardrails to any video with a cameo, and you can even set preferences for how your cameo behaves—for example, requesting that it always wears a fedora."
Brilliant.
Until you have two people who are nearly identical. They don’t even have to be twins; there are plenty of examples of people whom even other humans can’t tell apart. How is an AI going to do it?
You don’t own your likeness. It’s not intellectual property. It’s a constantly changing representation of a biological being. It can’t even be absolutely defined; it’s always subject to the way in which it was captured. Does a person own their likeness for all time? Or only their current likeness? What about more abstract representations of it?
The can of worms OpenAI is opening by going down this path is wild. We’re not currently able to solve such a complex issue. We can’t even distinguish robots from humans on the internet.
I'm an identical twin so immediately I can see a pretty stupid obvious problem with this.
Even if this company's guardrails end up working sufficiently well in practice (note the hedges: "intended", "take measures", "preferences... requesting", on things they can't do 100%), won't there be weak links elsewhere, letting similar computation be performed without sufficiently effective guardrails against abuse?
How do we prepare for this? Societal adjustment only (e.g., disbelieving defamatory video, accepting what pervs will do)? Establishing a common base of cultural expectations for conduct? Increasing deterrence for abusers?
Looks like it requires you to film yourself from specific angles while repeating an autogenerated phrase. Like a pre-AI selfie taken with a handwritten placard with your username and the date or whatever.
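A toy sketch of that enrollment flow, with hypothetical `issue_challenge`/`verify_enrollment` functions and the speech-recognition step stubbed out as a plain string comparison:

```python
import secrets

# Small made-up word list for generating challenge phrases.
WORDS = ["orbit", "velvet", "cobalt", "lantern", "meadow", "quartz"]

def issue_challenge(n_words=3):
    # An unpredictable phrase, so a pre-recorded or AI-generated video
    # made before enrollment can't be replayed as "proof".
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def verify_enrollment(challenge, transcribed_speech):
    # A real system would transcribe the video's audio with ASR and
    # check the face across the required angles; here we just compare
    # the (assumed) transcript to the issued phrase.
    return transcribed_speech.strip().lower() == challenge.lower()

challenge = issue_challenge()
print(challenge)                                      # e.g. "cobalt meadow orbit"
print(verify_enrollment(challenge, challenge))        # True: phrase was spoken
print(verify_enrollment(challenge, "old recording"))  # False: replay fails
```

The unpredictability of the phrase is what does the work, much like the handwritten placard with a username and date: it binds the footage to a specific enrollment request.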
Basically deepfakes for everyone.
Normalizing the effective abolition of consent for imagery, or consent in general for just about anything, when anyone can be portrayed doing anything.
Ah, the great trade-off that comes with little to no regulation.
Honestly this is the safest possible outcome.
If deepfakes remain the tools of nation-state actors, laypeople will be easily fooled.
If deepfakes are available on your iPhone and within TikTok, everyone will just ask "Is it Photoshop?" at every shred of doubt. (In fact, I already see people saying, "This looks like AI.")
This is good. Normalize the magic until it isn't magic anymore.
People will get it. They're smart. They just need exposure.
> People will get it. They're smart. They just need exposure.
I really doubt this.
If you are in the creative field, your work will just be reduced to "is this slop?" or to "fixed it!" atop a low-effort AI-generated rework of your original (fuck copyright, right?).
I already see artists fighting this, putting out their best non-AI work only for their audience to question whether it is real, and the work loses its impressiveness.
This already undermines creators who don't use AI-generated material.
But who cares about them, right? "It is the future," and it is most definitely AGI for them.
But then again, the starving artist never really made any money, and this ensures that the art form stays dead.
> People will get it. They're smart. They just need exposure.
It's either this, or the opposite (e.g., misinformation needs to be censored). It seems we as a society can't quite make up our minds on which approach to take.