The author makes the point that you should redo every manual commit with AI to align your mental model of the work with how the models actually operate. This is something I'm going to need to try. It relates to my desire to reduce things like the "discovery tax" (the phenomenon whereby a 5-minute agent task is 4 minutes of environment exploration and 1 minute of execution) and to ensure that models get things right the first time around. However, my AI improvement plan didn't really account for how to improve the model in cases where I ended up manually resolving issues or implementing features.
Some arguments are made about retaining focus and single-mindedness while working with AI, and I think these points are important. They relate to the article on cutting out over-eager orchestration and focusing on validation work (https://sibylline.dev/articles/2026-01-27-stop-orchestrating...). The article covers a few sides of this. First, always have a high-value task to switch to while the agent is working (instead of scrolling TikTok, Instagram, X, YouTube, Facebook, Hacker News, etc.); in my case I might start reading some books I have on the back burner, like Ghost in the Wires. Second, disable agent notifications and take control of when you return to check the model's context, so you're less ADHD-ridden when programming with agents and actually make meaningful progress on the side task, since you only context-switch when you're satisfied. The final point is to always have at least one agent, and preferably only one agent, running in the background. The idea is that always having an agent running yields a slow burn of productivity improvements and a process where you can gradually improve the background agent's performance. Generally, it's also a good way to stay on top of current model capabilities.
I also really liked the idea of overnight agents for library research, redeveloping projects to test out new skills, tests, and AGENTS.md modifications.