Having LLMs capable of generating text based on human training data obviously raises the bar for a text-only evaluation of "are you human?", but LLM output is still fairly easy to spot. Knowing what LLMs are capable of (sometimes superhuman things) and not capable of should make it fairly easy for a knowledgeable "Turing test administrator" to determine whether they are dealing with an LLM or not.

It would be a bit more difficult if you were dealing with an LLM agent tasked with faking a Turing test, as opposed to a naive LLM just responding as usual, but even there the LLM will reveal itself by the things that it plain can't do.

If you need a specialized skill set (deep knowledge of current LLM limitations) to distinguish between human and machine, then I would say the machine passes the Turing test.

OK, but that's just your own "fool some of the people some of the time" interpretation of what a Turing test should be, and by that measure ELIZA passed the Turing test too, which makes it rather meaningless.

The intent of the Turing test (it was just a thought experiment) was that if you can't tell it's not AGI, then it is AGI, which is semi-reasonable, as long as it's not the village idiot administering the test! It was never intended to be "if it can fool some people, some of the time, then it's AGI".

Turing's own formulation was "an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning". It is, indeed, "fool some of the people some of the time".

OK, I stand corrected, but then it is what it is. It's not a meaningful test for AGI - it's a test of being able to fool "Mr. Average" for at least five minutes.

I think that's all we have in terms of determining consciousness... if something can convince you, as another human would, then we just have to accept that it is.

Agreed. I tend to stand with the sibling commenter who said "ELIZA has been passing the Turing test for years". That's what the Turing test is. Nothing more.

LLM output might be harder to spot when it's mostly commands to drive the browser.

I often interact with the web all day and don't write any text a human could evaluate.

Perhaps, but that's somewhat off topic since that's not what Turing's thought experiment was about.

However, I'd have to guess that, given a reasonable amount of data, an LLM vs. a human interacting with websites would be fairly easy to spot, since the LLM would be more purposeful - it'd be trying to fulfill a task, while a human may be curious, distracted by ads, put off by slow response times, etc.

I don't think it's a very interesting question whether LLMs can sometimes generate output indistinguishable from a human's, since that is exactly what they were trained to do: mimic human-generated training samples. Apropos of a Turing test, the question would be whether I can tell this is not a human, even given a reasonable amount of time to probe it in any way I care to... but I think there is an unspoken assumption that the person administering the test is qualified to do so (else the result isn't about AGI-ability, but rather test-administrator ability).

> an LLM vs. a human interacting with websites would be fairly easy to spot, since the LLM would be more purposeful - it'd be trying to fulfill a task, while a human may be curious, distracted by ads, put off by slow response times, etc.

Even before modern LLMs, some scrape-detectors would look for instant clicks, no random mouse moves, etc., and some scrapers would incorporate random delays, random mouse movements, etc.
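
For what it's worth, here's a rough sketch of that kind of jitter, assuming Playwright's Python API (the selector, delays, and step counts are made up for illustration, not taken from any real scraper or detector):

    import random
    from playwright.sync_api import sync_playwright

    def humanish_click(page, selector):
        # Find the target element's position on the page.
        box = page.locator(selector).first.bounding_box()
        if box is None:
            return
        # Drift toward the element in several small mouse moves with pauses,
        # rather than an instant, pixel-perfect click.
        for _ in range(random.randint(3, 6)):
            page.mouse.move(
                box["x"] + random.uniform(0, box["width"]),
                box["y"] + random.uniform(0, box["height"]),
                steps=random.randint(5, 15),
            )
            page.wait_for_timeout(random.randint(50, 250))    # ms between moves
        page.wait_for_timeout(random.randint(300, 1200))      # "reading" pause
        page.mouse.click(box["x"] + box["width"] / 2,
                         box["y"] + box["height"] / 2)

    with sync_playwright() as pw:
        browser = pw.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        humanish_click(page, "a")
        browser.close()

The detectors, conversely, are mostly looking for the absence of exactly this kind of noise.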

Easy to spot, assuming the LLM is not prompted to use a deliberately deceptive response style rather than its "friendly helpful AI assistant" persona. And even then, I've had lots of people swear to me that an emoji-laden "not this, but that" bundle of fluff looks totally like it could have been written by a human.

Yes, but there are things that an LLM architecturally just can't do, and LLM-specific failure modes, that would still give it away, even if being instructed to be deceptive would make detection a bit harder.

Obviously, as time goes on and chatbots/AI progress, it'll become harder and harder to distinguish. Eventually we'll have AGI and AGI+ - capable of everything that we can do, including things such as emotional responses - but it'll still be detectable as an alien unless we get to the point of actually emulating a human being in considerable detail, as opposed to building an artificial brain with most or all of the same functionality (if not the flavor).