That’s a broken analogy. An intern and an LLM have completely different failure modes: an intern has some understanding of their own limits, while an LLM simply doesn’t. Something that looks remarkably human will make mistakes in ways no human would. That’s where the danger lies: we see the human-like thing outperform us at tasks that are difficult for humans and assume it’s better across the board. It isn’t.