Since subreddits dedicated to identifying AI images/videos got very popular, my wife has started sending me cute AI-generated videos, and my older family members can't distinguish AI videos at all, so I decided to code a weekend side project to train their Spidey sense for AI content.

https://IsThisAI.lol

The content is hand-picked from TikTok, Instagram, Facebook, Reddit, and other platforms where AI-generated content circulates.

Honestly I don't know where I'm going with this, but I felt the urge to create it, so here it is.

I learned how to optimize serving assets on Cloudflare.

Feedback welcome.

Tricky! I often guess wrong too. But I noticed a bug: sometimes I can click either option, "AI Generated" or "Real", and nothing happens. Even if I click 10 times, still nothing. The buttons must have some broken event handling or something.

EDIT: Hm, I switched tabs to write this comment, and now that I've switched back, it shows that my click registered. So it seems that sometimes there's just a huge delay in accepting my choice?

The project has gotten some traction: over 5k requests since I posted this. The DB probably needs to be optimized a bit. Thank you for reporting! I really appreciate it.

Edit: I don't see slow traces in Sentry. No idea what caused this. Also, voting goes through Redis and the DB load is low. Weird. I probably have to add gunicorn workers.

Edit2: Bumped gunicorn workers from 2 to 4. Should be fine now under the current load. Again, thank you for reporting!
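For context, the worker bump above amounts to something like this gunicorn config (a minimal sketch; only the `workers` change is from the post, everything else is an assumption about a typical setup):

```python
# gunicorn.conf.py -- hypothetical config; only the workers bump is from the thread
workers = 4    # bumped from 2; a common rule of thumb is (2 * CPU cores) + 1
threads = 2    # assumption: a few threads per worker helps with I/O-bound requests
timeout = 30   # assumption: kill requests stuck longer than 30s so hangs surface
```

With only 2 sync workers, two slow requests are enough to block every other click until one finishes, which would match the "nothing happens, then it registers much later" symptom described above.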

I am too skeptical of videos and way too trusting of photos I guess.

+1 for training parents' tech literacy.

I dunno if/how this could be taught, but I feel like half the battle is critical thinking with an adversarial mindset towards media: who would make this, why would they want to show it to me, do I see anything that makes this impossible, is it worth engaging with in the first place, can I fact-check this?

Yep, my thoughts exactly. But consumers rarely think critically when looking at ads, let alone regular social media posts, and Big Corp has no financial incentive to prove which assets are AI-generated.

I'm trying to gamify the training to make the experience more appealing.

I store a "proof URL" on the backend, but I don't know if it makes sense to serve it to the end user. Also, a Reddit discussion is not necessarily the kind of proof one wants. A fingerprint would be better, but not all images are generated with Google's tools. That's another problem to solve.

I love this. Each time my parents need their Wi-Fi fixed, I'm going to make them do 5 rounds of this app before I get to work.

Thank you for the kind words. I don't expect it to spread like wildfire, but I'd appreciate it if you could share it with your folks. I don't intend to monetize it; my goal is to have some small daily traffic.

It's SFW and localized to the most popular languages.

Very fun. You've hidden the controls on the video. Is that because you want it to be more of a game and prevent people (normies, at least) from seeking through the video, or is there some other reason?

How do you know whether the videos are AI or not? For many of them it's pretty clear, but I imagine not every user labels their AI uploads properly.

I only add the ones where it's proven to be AI, e.g. if it has SynthID or users found obvious AI mistakes. Adding proof is on the roadmap, but it's a bit tedious and there's no point in building it without traffic.

It would be cool if you could see the image/video again after choosing. I always want to watch it again after a wrong guess.

Hmm, I will consider adding a "close" button for the overlay popup. Thanks for the feedback.

What does the "this one is controversial" label mean? Does it just mean the voters are split, or does it mean it's not known whether the image/video is actually AI-generated?

I somewhat like it for what it is, but I expected something else based on the description. This is just a real/AI guesser that doesn't really train you at all.

I think that it's a great opportunity to play with relatives. Each person can explain why/why not and that's probably the main point.

It'll also probably humble those who think they know better. This works with driving-license tests: start one with the whole family and watch the older men get a reality check.

I like it too. But I think the real training is realizing that, as of 2025/26, human brains are already far behind at detecting AI-generated content, and they probably won't ever catch up.

The question is: can you be trained? Beyond the obvious cases, some AI-generated photos can't be distinguished from real ones.