It's rare that I think an academic paper from a good school that is trending on HN is actively stupid, but this is that paper.

If you're even considering going to all the trouble of setting up these weird lights and specialized algorithms for some event you're hosting, just shoot your own video of the event and post it. Done.

"Viewers" aren't forensic experts. They aren't going to engage with this algorithm or do some complex exercise to verify the private key of the algorithm prior to running some app on the video, they are just going to watch it.

Opponents aren't going to have difficulty relighting. Relighting is a thing Hollywood does routinely, and it's only getting easier.

Posting your own key and your own video does nothing to prove that video's veracity. You could still have shot anything you want, with whatever edits you want, and applied the lighting in software after the fact.

I'm sure it was fun to play with the lights in the lab, but this doesn't solve a significant problem well.

I think you might have misunderstood some core use cases.

One significant problem currently is long form discussions being taken wildly out of context for the sake of propaganda, cancelling, or otherwise damaging the reputation of those involved. The point isn't to prove that a given video was never edited, but that the original source video can be compared against a later cut (whether the original itself was edited is neither here nor there).

I'm not saying this solution is the answer, but being able to prove that videos are unedited from their original release is a pretty reasonable goal.

I also don't follow where the idea that viewers need to be forensic experts comes from. My understanding is that a video can be verified as authentic, at least in the sense the original author intended. I didn't read that viewers would be responsible for this, but rather that it can be done when required.

This is particularly useful in cases like the one I highlighted above, where a video may be re-cut to make an argument the person (or people) in question never made, and which might be used to smear said persons (a common occurrence in the world of long form podcasting, for example).

It would be interesting to know whether you could write software to take a video with these flashes in it, post-process them out, morph the video to appear shot from another angle, and add in a different signature. Then claim the first video is fake and that the second video is the true unedited version.
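
As a rough illustration of that attack, here is a deliberately toy sketch in Python. It assumes the watermark is nothing more than a small pseudorandom per-frame brightness wobble derived from a key, which is a big simplification of any real coded-illumination scheme; code_sequence and strip_and_resign are hypothetical names, not anything from the paper. It skips the angle-morphing step and only swaps signatures:

    import numpy as np

    def code_sequence(key: int, n_frames: int) -> np.ndarray:
        # Hypothetical: derive a +/-1 brightness code from a key.
        return np.where(np.random.default_rng(key).random(n_frames) < 0.5,
                        -1.0, 1.0)

    def strip_and_resign(frames, old_key, new_key, amplitude=0.5):
        # Toy attack: subtract the old per-frame brightness code, add a new
        # one. frames is an (n_frames, H, W) float array of luma values.
        # Assumes the watermark is a uniform additive wobble; a real scheme
        # would be far less convenient to estimate.
        n = frames.shape[0]
        old = code_sequence(old_key, n) * amplitude
        new = code_sequence(new_key, n) * amplitude
        return frames - old[:, None, None] + new[:, None, None]

    # Demo on synthetic footage: 120 flat gray frames "lit" with key 42.
    frames = np.full((120, 64, 64), 128.0)
    frames += code_sequence(42, 120)[:, None, None] * 0.5
    forged = strip_and_resign(frames, old_key=42, new_key=7)

    # Correlating per-frame mean brightness against each code shows the swap.
    mean = forged.mean(axis=(1, 2)) - forged.mean()
    print(np.corrcoef(mean, code_sequence(42, 120))[0, 1])  # ~0: old key gone
    print(np.corrcoef(mean, code_sequence(7, 120))[0, 1])   # ~1: new key found

A real scheme would presumably spread the code across space, time, and color in ways that are much harder to estimate and subtract cleanly, so treat this only as a sketch of the attack's shape.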

Total Relighting SIGGRAPH Talk: https://www.youtube.com/watch?v=qHUi_q0wkq4

Physically Controllable Relighting of Photographs: https://www.youtube.com/watch?v=XFJCT3D8t0M

Changing the view point post process: https://www.youtube.com/watch?v=7WrG5-xH1_k

It would be pretty cool to live in that world, where a maliciously edited video can be met with a better verified, full version of it.

I don’t think that’s where we are, right? People are happy to stop looking after they see the video that confirms their negative suspicions about the public figure on the other team, and just assume any negative clips from their own team are taken out of context.

While I don't know whether the paper is "stupid" or not, I think nobody in the last two decades has ever seen an uncut interview. So I don't see how this light would help or prove anything.

I think it is a current propaganda or messaging strategy: you say “In the uncut recording of the interview, I made really good points, but they spliced it up to make me look stupid,” or “In the uncut version of the interview, my opponent said a bunch of nonsense, but they cut it out.” This works because the broadcaster isn’t going to play the uncut version, and even if they did, nobody would bother watching it.

The broadcaster doesn't need to run the unedited video; anyone can keep a copy. If somebody edits the wrong answer onto a question, produce your unedited recording and point out what they did. (And, in a perfect world, sue for defamation.)

Even in a world where the common folk all accepted that such watermarking was a real phenomenon, they wouldn't ever verify it themselves. Even if they wanted to verify it themselves, there would need to be a chain of trust to actually verify what the watermark should be. And in the circles where fake videos circulate, that chain of trust will be distrusted, too.

This can be used for automated detection and flagging.
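
To make that concrete, here is a minimal sketch of what platform-side flagging could look like, assuming (hypothetically) that event organizers publish their light codes in some registry and that per-frame brightness has already been extracted from each upload; Upload, matches_code, and flag_for_review are made-up names, not anything from the paper:

    import statistics
    from dataclasses import dataclass

    @dataclass
    class Upload:
        url: str
        claimed_event: str
        mean_luma: list[float]  # per-frame brightness, extracted upstream

    def matches_code(luma, code, threshold=0.5):
        # Hypothetical check: correlate observed brightness against the
        # event's published code (statistics.correlation needs Python 3.10+).
        n = min(len(luma), len(code))
        return statistics.correlation(luma[:n], code[:n]) >= threshold

    def flag_for_review(uploads, registry):
        # registry maps event id -> published light code (assumed
        # infrastructure that would have to exist for this to work).
        for u in uploads:
            code = registry.get(u.claimed_event)
            if code is not None and not matches_code(u.mean_luma, code):
                yield u.url  # route to human review, not automatic removal

The design choice worth noting is that failures go to human review rather than automatic removal, since a low correlation could just mean heavy compression or a different camera, not forgery.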

I’m under the impression this isn’t for end users; it’s for enforcement within the context of intellectual property.

I’m curious what the value proposition is, as it’s unclear who would be buying this and why. I suppose platforms might want it to prove they can help, or to offer services enforcing brand integrity, maybe?

The central problem seems to be that the people in a position to benefit from claiming something real is fake are the same ones you have to trust to determine whether it's fake, since the viewer can't determine that themselves (even if they provide a black-box program that supposedly checks this, you can't know what it really does, so the same trust problem exists). Maybe this would be useful for a while within an organization, to be sure employees aren't using editing tools on video.

Yes, I think the state of modern video generation has made an uncomfortable truth clearer: all evidence is hearsay, only as trustworthy as the people you're getting it from. For a brief shining moment video evidence was easy to produce but hard to forge, but that hasn't been the case for most of history. That's why the law has so much detail about evaluating the trustworthiness of witnesses.