I don't find this surprising. We are constantly finding workarounds for technical limitations, then ditching them when the limitation no longer exists. We will probably be saying the same thing about LLMs in a few years (when a new machine-learning-related TLA becomes the hype).
100%. The speed of change is wild. With each new model, we end up deleting thousands of lines of code (old scaffolding we built to patch the previous model's failures).