I understand the appeal of hashing-based provenance techniques, but they’ve faced significant challenges in practice that limit their effectiveness. While many model developers have explored these approaches with good intentions, we’ve seen that they can be easily circumvented or manipulated, particularly by sophisticated bad actors who won’t follow voluntary standards.
We recognize that no detection solution is 100% accurate; there will be occasional false positives and negatives. That said, our independently verified internal testing shows we’ve achieved the lowest error rates currently available for deepfake detection.
I’d respectfully suggest that dismissing AI detection entirely might be premature, especially without hands-on evaluation. If you’re interested, I’d be happy to arrange a test environment where you could evaluate our solution’s performance firsthand and see how it might fit your specific use case.