Is this gonna be Turnitin[1] all over again?
[1]: https://www.nytimes.com/2025/05/17/style/ai-chatgpt-turnitin...
As noted elsewhere, we give confidence scores between 1% and 99%. We also use many different models for each modality, each with its own confidence score, to give a more robust and complete answer with each scan.
That doesn't fix the fundamental potential for abuse, the moral hazard, or the accountability sink[1].
[1]: https://en.wikipedia.org/wiki/The_Unaccountability_Machine