There's nothing general about AI-as-CEO.

That's the opposite of generality. It may well be the opposite of intelligence.

An intelligent system or individual reliably and efficiently produces competent, desirable, novel outcomes in some domain, while avoiding failure modes that are incompetent, derivative, or self-harming.

Traditional computing is very good at this for a tiny range of problems: you get efficient, fast, accurate, repeatable automation for a narrow set of operation types. You don't get invention or novelty.
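
A minimal sketch of that point, with a hypothetical example of my own (the function and figures are purely illustrative):

```python
# A deterministic computation: same input, same output, every time.
def settle_invoices(amounts: list[float], rate: float) -> list[float]:
    """Apply a fixed fee rate to each invoice amount."""
    return [round(a * (1 + rate), 2) for a in amounts]

invoices = [120.00, 89.50, 1042.75]

# Run it ten thousand times and the result is byte-for-byte identical.
results = {tuple(settle_invoices(invoices, 0.07)) for _ in range(10_000)}
assert len(results) == 1  # fast, accurate, repeatable - and never novel
```

That repeatability is exactly the strength of traditional computing, and exactly its limit: no run of that loop will ever surprise you.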

AGI will scale this reliably, and with novelty, across all domains - business, law, politics, the arts, philosophy, economics, all kinds of engineering, human relationships, and more.

LLMs are clearly a long way from this. They're unreliable, they're not good at novelty, and a lot of what they do isn't desirable.

They're barely in sight of human levels of achievement - not a high bar.

The current state of LLMs tells us more about how little we expect from human intelligence than about what AGI could be capable of.