Is this a shareable ruleset? I would completely understand if not but I’m interested in learning new ways to interact with my tools.
I tend to have three layers of "rulesets": one general one I reuse across almost any coding task (https://gist.github.com/victorb/1fe62fe7b80a64fc5b446f82d313...), then language-specific ones, and finally project-specific ones. I concat them before injecting them into the system prompt.
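The concatenation step is nothing fancy; here's a minimal Python sketch of it, where the file names and the final injection into the agent are placeholders for whatever your tooling actually expects, not part of my real setup:

```python
from pathlib import Path

# Hypothetical layout: one general ruleset, one language-specific, one project-specific.
RULESET_FILES = [
    Path("rules/general.md"),
    Path("rules/python.md"),
    Path("rules/this-project.md"),
]

def build_system_prompt() -> str:
    """Concatenate the rulesets, most general first, into a single system prompt."""
    parts = [p.read_text().strip() for p in RULESET_FILES if p.exists()]
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Whatever tool you use would take this string as (part of) its system prompt.
    print(build_system_prompt())
```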
Second this method.
The one thing I would really recommend adding to your constraints is Don't Repeat Yourself - always check whether something already exists. LLMs like to duplicate functionality, even when it's already in their context.
Can I ask why you have asked it to avoid abstractions? My experience has been that the old rules, such as "avoid premature abstraction" or "avoid premature optimization", don't apply as cleanly because of how ephemeral and easy to write the actual code is. I now ask the LLM to anticipate the space of future features and design modular abstractions that maximize extensibility.
> Can I ask why you have asked it to avoid abstractions?
Some models like to add abstractions regardless of their usefulness (Google's models seem excessively prone to this for some reason), so I ended up having to prompt that away so the model lets me come up with whatever abstractions are needed. The rules in that gist are basically just my own coding guidelines, put in a way that LLMs can understand; when I program "manually", I program pretty much that way.
I have yet to find any model that can properly plan feature implementations or come up with solid designs, abstractions included, so that's something I do myself, at least for now; the system prompts mostly reflect that workflow too.
> because of how ephemeral and easy to write the actual code is
The code I produce isn't ephemeral by any measure of that word I understand: anything I end up using stays where it is until it gets modified. I'm not doing "vibe coding", which it seems you are; that might call for some different prompts.
Until the agent disables the linter rule without you noticing!
Yup. I've watched both Claude and especially Gemini get frustrated trying to deal with my pre-commit checks (usually mypy) and decide to run `git commit -n`, even though my rules state explicitly, multiple times, that it's never okay to bypass the pre-commit checks.
I know you are joking, but I have the checks injected into the tools they use; they run automatically every time the agent runs a command to write, update, etc. I can configure them to block the file edits completely, or just to give feedback after each one. This is restricted outside of the codebase, but of course they could find a loophole to hack the whole thing, or they could just get frustrated and run a recursive loop script that would crash my computer :)
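For anyone curious what that looks like, here's a rough sketch, not my exact setup: it assumes a Python wrapper around the agent's file-write tool and mypy as the check, and the `write_file_tool` interface is a hypothetical stand-in rather than any real agent API.

```python
import subprocess
from pathlib import Path

def run_checks(path: Path) -> tuple[bool, str]:
    """Run the same check the pre-commit hook would (mypy here) on one file."""
    result = subprocess.run(["mypy", str(path)], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def write_file_tool(path: str, content: str, block_on_failure: bool = True) -> str:
    """Hypothetical wrapper around the agent's write tool: check every edit."""
    target = Path(path)
    original = target.read_text() if target.exists() else None
    target.write_text(content)

    ok, report = run_checks(target)
    if ok:
        return "write accepted"
    if block_on_failure:
        # Roll back the edit entirely so the agent can't leave failing code behind.
        if original is None:
            target.unlink()
        else:
            target.write_text(original)
        return f"write rejected, checks failed:\n{report}"
    # Feedback-only mode: keep the edit but surface the failures to the agent.
    return f"write kept, but checks failed:\n{report}"
```

In blocking mode the agent only ever sees the rejection message as the tool result, so there's no equivalent of `-n` for it to reach for.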