You'd be able to give them a novel problem and have them generalize from known concepts to solve it. Here's an example:

1. Write a specification for a programming language in natural language.

2. Write an example program in that language.

Can you feed (1) into a model and have it produce a compiler that handles (2) as reliably as a classically built one?

I think that's a low bar that hasn't been approached yet. Until then, I don't see evidence of language models' ability to reason.
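To make the proposed test concrete, here is a minimal sketch of what the two artifacts might look like. The toy language ("MiniStack"), the `run_ministack` function, and the example program are all hypothetical inventions for illustration; the interpreter stands in for the classically built reference that a model-generated compiler would be checked against.

```python
# (1) Natural-language specification (abridged, hypothetical):
#   A MiniStack program is a whitespace-separated list of tokens.
#   An integer token pushes its value onto the stack.
#   "add" pops two values and pushes their sum.
#   "mul" pops two values and pushes their product.
#   "dup" reads the top value and pushes a second copy of it.
#   The result of a program is the top of the stack after the last token.

def run_ministack(source: str) -> int:
    """Reference interpreter implementing the spec above."""
    stack: list[int] = []
    for tok in source.split():
        if tok == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif tok == "dup":
            stack.append(stack[-1])
        else:
            stack.append(int(tok))
    return stack[-1]

# (2) An example program: computes (2 + 3) * (2 + 3).
program = "2 3 add dup mul"
print(run_ministack(program))  # 25
```

The test would then be: hand only the prose spec (1) to a model, ask it to emit a compiler or interpreter, and check that its output matches the reference on programs like (2), including ones it has never seen.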

I'd accept that as a human kind of intelligence, but I'm really hoping that AGI would be a bit more general: clever human thinking would be a subset of what it could do.

You could ask Gemini 2.5 to do that today, and it's well within its capabilities, as long as you also let it write and run unit tests, as a human developer would.