Feels to me like the toolchain for using LLMs in various tasks is still in flux (I interpret all of this as "stuff in different places, like .md files or skills or elsewhere, that is appended to the context window" — I hope that's correct). Shouldn't this overall process be standardized/automated? That is, use some self-reflection to figure out patterns that are then dumped into the optimal place, like a .md file or a skill?
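The "self-reflection dumped into a .md file" idea could be sketched as a tiny loop. This is a hypothetical stand-in, not any real tool's API — `extract_patterns`, the session notes, and the `MEMORY.md` target are all made up for illustration:

```python
from collections import Counter
from pathlib import Path

def extract_patterns(session_notes, min_count=2):
    """Hypothetical self-reflection step: keep any observation that
    recurred across sessions (a stand-in for asking the model to
    summarize its own transcripts)."""
    counts = Counter(session_notes)
    return [note for note, n in counts.items() if n >= min_count]

def persist_patterns(patterns, target=Path("MEMORY.md")):
    """Append recurring patterns to a markdown file that gets loaded
    into the context window on future runs."""
    with target.open("a") as f:
        f.writelines(f"- {p}\n" for p in patterns)
    return target

# toy session notes; only the repeated observation survives
notes = [
    "prefers pytest over unittest",
    "repo uses tabs",
    "prefers pytest over unittest",
]
patterns = extract_patterns(notes)
```

The hard part, of course, is the extraction step — deciding which observations are durable patterns rather than one-off noise — which is exactly the piece still in flux.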

The entire tooling ecosystem is in flux.

Looking forward, the future is ad-hoc, disposable software that once would have taken a large team a dozen sprints to release.

Eventually it'll be use case -> spec -> validation -> result.
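That use case -> spec -> validation -> result pipeline could be sketched as a generate-and-check loop. Everything here is a toy assumption — `generate` is a stub standing in for whatever the model produces, and the spec is just a list of checks:

```python
def validate(spec, candidate):
    """Check a candidate result against every constraint in the spec."""
    return all(check(candidate) for check in spec["checks"])

def run_pipeline(spec, generate, max_attempts=5):
    """use case -> spec -> validation -> result:
    regenerate until a candidate passes the spec, or give up."""
    for attempt in range(max_attempts):
        candidate = generate(spec, attempt)
        if validate(spec, candidate):
            return candidate
    raise RuntimeError("no candidate satisfied the spec")

# toy spec: an even number greater than 10
spec = {"checks": [lambda x: x % 2 == 0, lambda x: x > 10]}
# stub generator: 10, 11, 12, ... — the loop keeps going until one passes
result = run_pipeline(spec, generate=lambda s, i: 10 + i)
```

The interesting design question is that the human only writes the spec and judges the result; the generate/validate loop in the middle is what the tooling automates.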

The TV show Stargate showed controls that scientifically calculated and operated starships, so all the operator had to do was point them at the destination. The AI/computer/hardware knows how to get to the result, and that result is human-driven.

I've seen evidence of this at work and in my own life, with the key component being the tooling integration.

Too early for standardization. Resist the urge. Let a bunch of ideas flow, then watch the Darwinian process by which the best setup gets found. Then standardize.