I think many people are now learning that their definition of intelligence was actually not very precise.

From what I've seen, in response to that, goalposts are then often moved in whichever way requires the least updating of somebody's political, societal, metaphysical, etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)

I remember when the goalposts were set at the "Turing test."

That's certainly not coming back.

If you know the tricks, won't you be able to figure out whether some chat is being done by an LLM?