I see AI pass the Turing test all the time, since humans are constantly being falsely accused of being AI.

It doesn't mean that AI got good, just that humans are thinking other humans are AI, which is a form of passing the test.

The adversarial version with humans involved is actually easier to pass because of this: real, actual humans wouldn't pass your non-adversarial version.

I've seen a fair number of cases where someone swears up and down not to be using AI to generate responses, but there's no good reason to believe it (except perhaps specifically for the messages where that claim is made).

This includes times that someone basically disappeared from e.g. Stack Overflow at some point before the release of ChatGPT, having written a bunch of posts that barely demonstrate functional literacy or comprehension of English; and then came back afterward posting long messages with impeccable grammar and spelling in textbook "LLM house style".

There are a ton of people like that, but the LLM house style exists precisely because plenty of people already write that way.

The people falsely accused because they've used em dashes for 20 years aren't the ones who were functionally illiterate before.

I don't think there's any definitive way to check, but for me one of the biggest tells that a long piece of writing was LLM generated is that it will hardly say anything given how many words are in it.

(well that and the "it's not just x, it's y!" pattern they seem to love)

I think em-dashes were uncommon mainly because they're not always convenient to type.

In one study, GPT-4.5 was judged to be human 73% of the time, which means that the actual human was judged to be human only 27% of the time. More human than human, as Tyrell would say.

Edit: folks, the standard Turing test involves a computer and a human, and then a judge communicating with both and giving a verdict about which one is the human. The percentages for the two entities being judged will add up to exactly 100%. That's how this test was conducted. Please don't assume I'm a moron.

The implication would be that GPT-4.5 was not judged to be human 27% of the time. You can't determine how often humans were judged correctly as humans from that data point.

The structure of the test was that there was one human and one AI conversation partner, and the rater had to choose which one was which.

Given that structure, you can judge from that data point.
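The forced-choice structure above is why the two percentages must sum to 100%: each trial, the judge awards the "human" label to exactly one of the two partners, so every vote the AI wins is a vote the human loses. A minimal simulation sketch (the 73% figure is from the study cited below; the simulation itself and the function name are hypothetical illustration, not the study's methodology):

```python
import random

def run_forced_choice_trials(p_ai_chosen: float, n: int, seed: int = 0) -> tuple[float, float]:
    """Simulate a two-alternative forced-choice Turing test.

    Each trial, the judge sees one AI and one human and must pick
    exactly one as 'the human'. p_ai_chosen is the (assumed)
    probability the judge picks the AI.
    """
    rng = random.Random(seed)
    ai_votes = sum(rng.random() < p_ai_chosen for _ in range(n))
    human_votes = n - ai_votes  # whichever partner wasn't picked loses the vote
    return ai_votes / n, human_votes / n

ai_rate, human_rate = run_forced_choice_trials(0.73, 10_000)
# By construction, the two rates always sum to 1: a 73% "judged human"
# rate for the AI forces a 27% rate for the actual human.
assert abs((ai_rate + human_rate) - 1.0) < 1e-12
```

This is exactly why the one data point is enough here, whereas in a test where each partner is judged independently (human-or-not, one at a time), the two rates would be free to vary separately.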

That was also before the crazy AI hysteria we have today with the em-dash police everywhere.

For the test to be free of bias, we’ll have to ensure all the humans are from Nigeria.

Those stats don't necessarily line up that way. Do you have a link?

Given the way the test was structured it does line up.

https://arxiv.org/abs/2503.23674

Surprisingly good. I wonder how they would have done without the 5-minute limit on conversations (an average of 8 messages per conversation, per the study).