You can’t just ask the AI to dump out code; you need to at least loosely describe what design elements you think are important. For SQL, for example, you might want to plan out your CTEs first, then come up with a strategy for implementing each one, before getting to the SQL file itself (and of course tests, but those are a separate line of artifacts: you don’t want the AI to look at the tests when updating code, because you want to avoid letting the AI code to the test). You can also look at where the AI is having trouble doing something, or not doing it very well, and ask it to write documentation that will help it do that more successfully.
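To make the SQL case concrete, here is a minimal sketch of what that kind of plan might turn into. Every table and column name here is hypothetical; the point is just that each CTE maps back to one step of the written plan.

```sql
-- Plan (written before the SQL):
--   1. recent_orders: restrict to the time window we care about.
--   2. revenue_per_customer: aggregate order totals per customer.
--   3. Final select: join the aggregate back to customer attributes.
WITH recent_orders AS (
    -- Step 1: shrink the working set before any aggregation
    SELECT order_id, customer_id, order_total
    FROM orders
    WHERE order_date >= DATE '2024-01-01'
),
revenue_per_customer AS (
    -- Step 2: one row per customer with summed revenue
    SELECT customer_id, SUM(order_total) AS total_revenue
    FROM recent_orders
    GROUP BY customer_id
)
-- Step 3: attach customer attributes for the final report
SELECT c.customer_id, c.region, r.total_revenue
FROM revenue_per_customer AS r
JOIN customers AS c ON c.customer_id = r.customer_id;
```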

I can’t imagine asking the AI to change some code without having a description of what the code does. You could maybe reverse engineer that, but that would basically be generating the documents after the fact. The same goes for changing code without tests: failing tests are actionable signals that let the AI make sure it doesn’t break things on update. Some people here think you can just ask it to write code without any other artifacts, and that’s nuts (maybe agentic tooling will develop in the direction of the AI writing persistent artifacts on its own without being told to; actually, I’m sure that will happen eventually).
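As a sketch of what a separate test artifact could look like for the query above (again, all names are hypothetical), an assertion query that should return zero rows; any rows it does return are an actionable signal that the update broke something:

```sql
-- Test artifact, kept out of the context the AI sees when editing the query:
-- the per-customer aggregate must have exactly one row per customer.
SELECT customer_id, COUNT(*) AS row_count
FROM revenue_per_customer_report   -- hypothetical table the query materializes into
GROUP BY customer_id
HAVING COUNT(*) > 1;
```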

> You can’t just ask the AI to dump out code; you need to at least loosely describe what design elements you think are important

Right. And that’s what I’ve tried to do, but I’m not confident it’s captured the most critical info in an efficient way.

> I can’t imagine asking the AI to change some code without having a description of what the code does. You could maybe reverse engineer that, but that would basically be generating the documents after the fact.

This is exactly how I’ve been using AI so far. I tell it to deeply analyze the code before starting, and it burns huge amounts of tokens relearning the same things it learned last time. I want to get some docs in place to minimize this. That’s why I’m interested in what a subagent would respond with, since that’s what it’s usually operating with. Or maybe the compressed context would be an interesting reference.

You can save that analysis, and those become your docs. But your workflow has to keep them in sync with the code.

Working for a FAANG, I have no idea about token cost; it’s a blind spot for me. One of these days I’m going to try to get Qwen Coder going for some personal projects on my M3 Max (I can run 30b or even 80b heavily quantized) and see if I can build a workflow that’s thrifty with the resources a local LLM provides.

I’m not actually paying for tokens. Just trying to be a good citizen. And also trying to figure out how to set everyone in my organization up to do the same.

Interestingly, while playing with Claude Code I just learned that /init actually does analyze the codebase and record its findings.