The problem of authenticating footage has a disappointing but sufficient solution:
1.- Use filming devices that sign the footage with a key, with anti-tamper protections in the device to keep anyone from extracting that key (a sketch of this step appears after this list).
2.- The scheme above is useless for most consumers of the footage, who would only see it after three or four transcodings have changed the bytes beyond recognition. But in a few years everybody will assume most footage on the Internet is fake, and in those cases when people (say, law enforcement) want to know for sure whether something really happened, they'll have to go through the trouble of procuring and watching the original, authenticated file.
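To make point 1 concrete, here is a minimal sketch of the sign-and-verify flow, assuming Python's `cryptography` package and a bare Ed25519 key. A real scheme (C2PA, for instance) would sign a manifest with a certificate chain rooted at the device maker rather than a lone key, but the mechanics are the same:

    # Sketch of point 1: the device holds a private key and signs each
    # file; anyone with the maker's public key can check the signature.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # In a real camera this key would live in tamper-resistant hardware.
    device_key = Ed25519PrivateKey.generate()
    public_key = device_key.public_key()

    footage = b"raw sensor bytes..."        # stand-in for the real file
    signature = device_key.sign(footage)    # shipped alongside the file

    # Verification passes only on the exact original bytes.
    public_key.verify(signature, footage)

    # A single transcoding pass changes the bytes and breaks it,
    # which is why point 2 follows:
    transcoded = footage + b" re-encoded"
    try:
        public_key.verify(signature, transcoded)
    except InvalidSignature:
        print("not the original file")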
The alternatives to 1 and 2 are:
a) To engage in an arms race, like the one happening with CAPTCHAs right now.
b) To roll back this type of AI.
Option b is not going to happen, even with a societal collapse and sweeping laws against GenAI, because the way GenAI works is widely known. Unless, that is, we want to roll back technology itself: stop producing the hardware, and erase the knowledge so that people no longer know how to do any of this.
The whole "digital cameras with unhackable authenticity signatures" idea can kind of be broken by just pointing a camera at a screen. It sounds a bit ridiculous but screen density and dynamic range is quite high these days, with the right setup you could play a deepfake video and get it to look real AND have a signature saying it's totally authentic
Film worked for a long time (and still works) despite the availability of film printers.