Exactly, that is my question too. If you can detect the lighting variations to read and verify the code, then you can also extract them, remove them, and reapply them to an edited or AI-generated version... varying the level of global illumination in a video is about the easiest thing to manipulate.
Although there's a whole other problem with this, which is that it's not going to survive consumer compression codecs. Because the changes are too small to be easily perceptible, codecs will simply strip them out. The whole point of video compression is to remove perceptually insignificant differences.
As I understand it, the brilliant idea is that the small variations in brightness of the pixels look just like standard sensor noise. Distinguishing the actual noise from the algorithm's signal is not possible, but it is still possible to verify that the 'noise' follows the correct pattern.
The correct pattern for the claimed time span, matching the random fluctuations of the electrical grid frequency.
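Roughly, the verification step would look something like this. A minimal numpy sketch, assuming you already have a per-frame mean-brightness series from the video and a reference grid-frequency trace for the claimed recording interval (both hypothetical inputs; real ENF verification works on the instantaneous flicker frequency rather than raw brightness):

```python
import numpy as np

def enf_match_score(frame_brightness, reference_enf):
    """Correlate per-frame mean brightness against a reference grid trace
    to check whether the embedded 'noise' follows the expected pattern.

    frame_brightness: 1-D array, mean pixel value of each frame
    reference_enf:    1-D array, reference grid signal sampled at the
                      video frame rate for the claimed interval (hypothetical)
    """
    # Remove slow-moving scene content; only the tiny flicker matters.
    b = frame_brightness - np.convolve(frame_brightness,
                                       np.ones(15) / 15, mode="same")
    r = reference_enf - reference_enf.mean()

    # Normalized cross-correlation: near 1 means the flicker tracks the grid,
    # near 0 means it is just camera noise (or was stripped / re-synthesized).
    n = min(len(b), len(r))
    b, r = b[:n], r[:n]
    return float(np.dot(b, r) / (np.linalg.norm(b) * np.linalg.norm(r) + 1e-12))
```

The point being made upthread still stands: anything this detector can recover, a forger can in principle recover and re-embed too.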
I think that will be handled by the AC to DC conversion in most systems.
Nope. Mains hum shows up in microphone recordings as well as in light intensity:
https://en.wikipedia.org/wiki/Electrical_network_frequency_a...
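For a rough idea of how the audio side of ENF analysis works, here is a minimal numpy/scipy sketch (assuming a 50 Hz grid; the frequency resolution here is coarse, whereas real ENF tools interpolate the spectral peak to reach millihertz precision):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

def extract_enf_from_audio(audio, sr, mains_hz=50.0):
    """Track the mains-hum frequency picked up by a microphone.

    audio:    1-D float array of samples
    sr:       sample rate in Hz
    mains_hz: nominal grid frequency (50 Hz in Europe, 60 Hz in the US)
    Returns a per-window estimate of the hum frequency over time.
    """
    # Narrow band-pass around the nominal grid frequency to isolate the hum.
    sos = butter(4, [mains_hz - 1.0, mains_hz + 1.0], btype="bandpass",
                 fs=sr, output="sos")
    hum = sosfiltfilt(sos, audio)

    # Short-time FFT; the peak bin in each window gives the instantaneous
    # grid frequency, which drifts slightly and randomly over time.
    f, t, Z = stft(hum, fs=sr, nperseg=int(sr * 4), noverlap=int(sr * 2))
    band = (f > mains_hz - 1.0) & (f < mains_hz + 1.0)
    return f[band][np.argmax(np.abs(Z[band, :]), axis=0)]
```

That drifting frequency track is what gets matched against logged grid data to date a recording, which is why a simple AC-to-DC stage in the recorder doesn't make the signal go away.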