> Instead of just asking, “Can we train a model to solve X?”, we’re asking, “What should we be training AI to do, and how do we measure real progress?”

To say we are at a point where AI can do anything reliably is laughable. It can do a great deal, but it will give you an answer, right or wrong, with full confidence. Trusting such a technology with the big, no-human-in-the-loop decisions we want to hand it is a fool's errand.