> I'm arguing that there's a skill that has to be learned in order to break through this. When you start in a new codebase, you should be quick to jump in when you hit that 20%. But as you spend more time in it, you learn how to avoid the same "context hell" issues and push that number down to 15%, 10%, 5% of the time.
The problem is that this skill needs refinement each time you switch models: you'll redo some of that learning with every new model you use.
This might not be a problem in the long run, though, as all the models seem to be converging asymptotically towards "programming".
The better they do on the programming benchmarks, the further away from AGI they get.