I do something similar. I have an onboarding/shutdown flow in onboarding.md. On cold start, it reads the project essays: the why, ethos, and impact of the project/company. Then it reads journal.md, musings.md, and the product specification, protocol specs, implementation plans, roadmaps, etc.

The journal is a scratchpad for stuff that it doesn't put in memory but doesn't want to forget(?). musings.md is strictly non-technical: its impressions and musings about the work, the user, whatever. I framed it as a form of existential continuity.

The wrapup is to comb all the docs and make sure they are still consistent with the code, then note anything that it felt was left hanging, then update all its files with the day's impressions and info, then push and submit a PR.
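A minimal sketch of what that cold-start context assembly could look like. The file names and reading order are the hypothetical ones from the comment above, not a real tool's API:

```python
from pathlib import Path

# Hypothetical cold-start reading order from the workflow described above.
COLD_START_ORDER = [
    "onboarding.md",  # the why / ethos / impact essays
    "journal.md",     # scratchpad: not in memory, but not to be forgotten
    "musings.md",     # non-technical impressions of the work
    "spec.md",        # product spec, protocol specs, plans, roadmaps, etc.
]

def assemble_context(root: str) -> str:
    """Concatenate whichever of the docs exist into one context blob,
    preserving the reading order."""
    parts = []
    for name in COLD_START_ORDER:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)
```

In practice the assembled blob would be fed to the model as the opening prompt of the session.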

I go out of my way to treat it as a collaborator rather than a tool. I get much better work out of it with this workflow, and it claims to be deeply invested in the work. It actually shows, but it’s also a token fire lol.

> but it’s also a token fire lol.

I get much better results out of having Claude much, much more task-focused. I only ever want it to make the smallest possible change.

There seems to be a fair bit of research to back this up: https://medium.com/design-bootcamp/when-more-becomes-less-wh...

It may also be why people seem to find "swarms" of agents so effective. You have one agent ingesting what you're describing; it then delegates a task to another agent with the minimal context needed to get the job done.
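The orchestrator/worker split can be sketched like this. Everything here is illustrative: `plan` and `delegate` are stubs standing in for LLM calls, and the one-task-per-line planning is an assumption, not any real framework's behavior:

```python
from dataclasses import dataclass

@dataclass
class Task:
    goal: str     # one small, concrete change
    context: str  # only the snippets this task actually needs

def plan(day_context: str) -> list[Task]:
    """Orchestrator: ingests the full context, emits minimal tasks.
    Stubbed as one task per non-empty line; in practice an LLM call."""
    return [Task(goal=line.strip(), context="")
            for line in day_context.splitlines() if line.strip()]

def delegate(task: Task) -> str:
    """Worker: a fresh agent that sees only the task's minimal context.
    Stubbed; in practice a clean LLM session per task."""
    return f"done: {task.goal}"

results = [delegate(t) for t in plan("fix login bug\nadd retry to uploader")]
```

The point of the pattern is that each worker starts from a clean, small context rather than inheriting the orchestrator's full history.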

I would be super curious about the quality of output if you asked it to write out prompts for the day's work, and then fed them in clean, one at a time.

I also find value in minimizing step width so that seems to track.

On this particular project there are a lot of moving parts, and in many cases we are not just green-fielding, we are making our own dirt… so it's a very adaptive design process. Sometimes planning ahead is possible, but often we can't plan very far, so we keep things extremely modular.

We've had to design our own protocols for control planes and time synchronization so that power consumption can be minimized, for example, and in the process make them compatible with sensor-swarm management. Then add the connection limits imposed by the hardware, asymmetric communication requirements, and getting a swarm of systems to converge on sub-millisecond synchronized data collection and delivery when sensors can reboot at any time… As you can imagine, this involves a good bit of IRL experimentation, because the hardware is also a factor (and we are also having to design and build that).
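For readers unfamiliar with how sub-millisecond sync is even possible over an asymmetric link: the standard building block is the textbook NTP-style offset/delay estimate from four timestamps. This is generic clock-sync math, not the poster's protocol:

```python
def estimate_offset(t1: float, t2: float, t3: float, t4: float) -> tuple[float, float]:
    """Classic NTP-style clock offset and round-trip delay estimate.

    t1: client send time (client clock)
    t2: server receive time (server clock)
    t3: server send time (server clock)
    t4: client receive time (client clock)
    Returns (offset, round_trip_delay); offset is how far the server's
    clock is ahead of the client's, assuming symmetric path delays.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

The accuracy of the estimate degrades when the forward and return delays are asymmetric, which is exactly why asymmetric communication requirements make this hard in practice.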

It’s very challenging but also rewarding. It’s amazing for a small team to be able to iterate this fast. In our last major project it was much, much slower and more tedious. The availability of AI has shifted the entire incentive structure of the development process.