That’s true. The demo I showed was somewhat cherry-picked, and agentic systems inherently introduce uncertainty. To address this, one approach was proposed earlier in this thread: currently, after /teach completes, we hold an interactive discussion to refine the learned skill. In practice, this could likely be improved further: when the agent uses a learned skill and encounters an error, it could proactively request human help to point out the mistake. I think this could be an effective direction.
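To make the idea concrete, here is a minimal sketch of that error-triggered feedback loop. Everything here is hypothetical: `run_skill`, `ask_human`, and the retry logic are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch: when a learned skill fails, proactively ask a human
# to point out the mistake, fold the correction in, and retry.
# All names here are illustrative, not from the actual project.

def run_skill(skill, task, ask_human, max_attempts=3):
    """Run a learned skill; on failure, request a human correction
    and append it to the task context before retrying."""
    for attempt in range(1, max_attempts + 1):
        try:
            return skill(task)
        except Exception as err:
            # Surface the error and ask the human what went wrong.
            correction = ask_human(
                f"Skill failed on attempt {attempt}: {err}. "
                "What did I get wrong?"
            )
            task = f"{task}\n[human correction: {correction}]"
    raise RuntimeError("skill still failing after human corrections")
```

The point of the sketch is just the control flow: the agent does not silently retry, it pauses at the error and pulls the human into the loop, which is also where the refined skill could be persisted.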