>These tools are not perfect, but they're right more often than they're wrong.
Based on what in particular? The only time I have used them is to have a laugh.
Based on experience, including a good number of experiments I've run on known-LLM output and contemporary, known-human text. Try them seriously and you may be surprised. Some of the good, state-of-the-art tools include originality.ai and Pangram.
A lot of people on HN have preconceived notions here based on stories they read about someone being unfairly accused of plagiarism or people deliberately triggering failure modes in these programs, and that's basically like dismissing the potential of LLMs because you read they suggested putting glue on a pizza once.
I had fun with AI detectors, particularly for images; even the best one (Hive, in my opinion) failed miserably on my tests. Maybe the ones trained on text are better, but I find them hard to trust, especially if someone knows how to fiddle with them.