I don’t think we will though, because the “short game” is matching the requirements of the agent operator. If we don’t care about the finer details that we let the LLMs infer, then we shouldn’t care if a human infers them (and yet we do).
I think LLMs are great, and I think people who can use them to get to the green in one and take it from there will soar, just like people who could identify a problem and solve it themselves did in the past.