It's obvious there is no way OpenAI can keep videos generated by this within their ecosystem. Everything will be fake, nothing real, and we are going to have to change the way we interact with video. It's already possible to fake videos today, but it takes work and skill from the creator. Now it will take no skill at all, and the consequence is that we can't believe anything we see.

The worst part is we are already seeing bad actors claim "I didn't say that" or "I didn't do that, it was a deepfake." Now you will be able to say anything in real life and use AI for plausible deniability.

I predict a resurgence in live performances: live music and live theater. People are going to get tired of video content when everything is fake.

The Sora 2 livestream indicated that videos exported from the app will carry visible watermarks.

Sure, but then you just pump it through another model that removes the watermarks.