The best way to prompt an LLM is to describe the outcome you want, that's it. They are trained as task completers. A clear outcome is way better than a process.
If the LLM fails, either you didn't describe your outcome sufficiently, or it misinterpreted what you said, or it couldn't do it (rare).
Common errors should be encoded as context for future similar tasks; don't bloat skills with stuff that isn't shown to be necessary.
> The best way to prompt an LLM is to describe the outcome you want, that's it. They are trained as task completers. A clear outcome is way better than a process.
This is not true for anything complex. They’re instruction followers, of which task completion is just one facet.
They're also extremely eager to complete tasks without enough information, and to do them wrong. If you only describe the outcome, then despite your best efforts there are always some oversights, or things you didn't even realize were underspecified.
So it helps a lot to add some process around it, e.g. "Look up relevant project conventions and information. Think through how to complete the task. Ask me clarifying questions to resolve ambiguities." and so on. This type of prompt will also help with the new Opus 4.7 adaptive thinking, to ensure it actually thinks through the task.
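For what it's worth, the process wrapper can literally just be a system prompt. A minimal sketch with the Anthropic Python SDK; the model id and the exact wording are placeholders, not recommendations:

```python
# Sketch: wrap the outcome description in process instructions
# via the system prompt. Model id is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

PROCESS_PREAMBLE = """\
Before writing any code:
1. Look up the relevant project conventions and prior art.
2. Think through how you would complete the task end to end.
3. Ask me clarifying questions to resolve any ambiguities.
Only then implement the change."""

def run_task(outcome: str) -> str:
    response = client.messages.create(
        model="claude-opus-4-5",  # placeholder model id
        max_tokens=2048,
        system=PROCESS_PREAMBLE,  # the process wrapper
        messages=[{"role": "user", "content": outcome}],  # the outcome
    )
    return response.content[0].text
```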
Agreed, and further, I'd argue the OP's division of LLM instructions into either process or outcome specification is a false dichotomy.

My agentic process specification is about automatically specifying the outcomes that I would otherwise repeatedly have to tell the LLM to consider, like making sure test coverage is maintained, or that decisions are documented on the original GitHub issue. Or it's about correcting common failure modes, like when the agent spends an enormous amount of time running repo-wide tests while debugging a focused change, because the agent doesn't consistently optimize for time-to-implement as an outcome.

Arguably, part of addressing those failure modes boils down to pure process, in the sense that I specify a logical order for achieving the outcomes, e.g. creating a plan before implementing (see the sketch below). But that is mostly to organize approval gates for my convenience, rather than to structure the agent's work per se.
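Here's roughly what I mean by an approval gate. `ask_agent()` is a hypothetical stand-in for whatever agent call you use, not a real API:

```python
# Sketch of a "plan before implementing" approval gate.
def ask_agent(prompt: str) -> str:
    ...  # hypothetical: call your agent/LLM here

def gated_change(task: str) -> str:
    plan = ask_agent(
        f"Write a step-by-step plan for: {task}\n"
        "Include which tests you will run and what you will document "
        "on the GitHub issue. Do not implement anything yet."
    )
    print(plan)
    # The human checkpoint: nothing gets built until the plan is approved.
    if input("Approve plan? [y/N] ").strip().lower() != "y":
        raise SystemExit("Plan rejected; revise the task description.")
    return ask_agent(
        f"Implement the approved plan:\n{plan}\n"
        "Run only the tests relevant to the change, not the whole suite."
    )
```

The point isn't the code; it's that the "process" here is mostly pre-baked outcome checks plus a human checkpoint, not choreography for its own sake.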
If there is anything we have learned in decades of software engineering, it's that "a clear outcome" is not easy to describe. In many cases it's impossible unless people from four different domains collaborate. That's why process matters: it allows software to be built in a "semi-standardized" way, so iterations can get us closer to the expected outcome, which might only emerge over time.
Yes, and not everything I use LLMs for is going to have the same level of ambiguity or complex requirements. Optimizing by choosing to skip over parts of the process is exactly what Addy is talking about in this article.
This seems like common sense but it does not work in practice.
Prompting is just the first part. To get the outcome, you need other systems to steer the agent as it gets things wrong. Proper deterministic tests work. But there is also stuff that needs to happen during LLM execution, like cycle detection (see the sketch below). All of this adds up.
You cannot just prompt an LLM and hope for a good outcome. That might work in small, isolated scenarios, but it does not work consistently enough to call it reliable.
Without further guardrails enforced by the process or the harness, LLMs do not have sufficient capabilities to complete a task up to a certain standard.
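To make the cycle detection point concrete, a minimal sketch; the class name and window size are made up, and a real harness would compare more than exact-duplicate calls:

```python
# Naive cycle detection over an agent's tool calls: if the agent
# issues the same call twice within a short window, stop and
# escalate instead of burning tokens in a loop.
from collections import deque

class CycleDetector:
    def __init__(self, window: int = 8):
        # Remember only the last `window` calls.
        self.recent = deque(maxlen=window)

    def check(self, tool_name: str, args: str) -> None:
        call = (tool_name, args)
        if call in self.recent:
            raise RuntimeError(f"Agent is looping on {tool_name}; aborting.")
        self.recent.append(call)
```

Hook `check()` into whatever dispatches the agent's tool calls; it's one of many guardrails a harness ends up accumulating.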
I agree that many skills are overblown and unnecessary. But there's a lot of value in giving AI the right process. See how much more effective Claude can be for moderate or large changes when using the superpowers skill.
Sometimes people don't know what they want.
I prefer the start small and iterate approach to arrive at a result.
Then I ask it to summarize. Sometimes after that I ask it to generalize.
A skill is just reusable, shareable context. It's just text, really. It's useful for things like documentation on how to use an API (this works better than MCP, in my opinion), or for a non-consensus way of doing something. For example, you can use Remotion to generate video. There are useful Remotion skills that let you reliably generate specific types of videos, such as captions in a particular style.
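Concretely, a skill is usually just a file of instructions with a little metadata on top. Roughly the shape of an Anthropic-style SKILL.md, with made-up names and steps:

```markdown
---
name: remotion-captions
description: Generate caption videos of a specific style with Remotion.
  Use when the user asks for captioned video output.
---

# Generating styled captions

1. Scaffold a composition with `npx create-video`.
2. Follow the caption component conventions documented below.
3. Render with `npx remotion render`.
```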
That seems a bit reductive. Even with humans, there’s a range of interpretations and ways that something can be built or a task completed. Engineers remember stuff so you don’t have to keep repeating yourself. Skills are a way to describe your outcome without similar repetition.