For almost two years, we've been developing Charlie, an autonomous, cloud-based coding agent focused primarily on TypeScript development. During that time, the explosion of LLMs and agents has outpaced even our initially very bullish forecasts. When we started Charlie, we were one of the only teams we knew of relying fully on agents to write all of our code. We all know how that has gone: the world has caught up, but working with agents hasn't been all kittens and rainbows, especially for fast-moving teams.
One thing we've noticed over the last three months is that the more you use agents, the more work they create. Dozens of pull requests mean older code goes out of date quickly. Documentation drifts. Dependencies go stale. Developers are so focused on shipping new code that this crucial maintenance falls through the cracks. That's why we went beyond agents and built what we think is the necessary next step for AI-powered software development.
Today, we're introducing Daemons: a new product category built for teams dealing with operational drag from agent-created output. Named after the familiar background processes from Unix, a daemon is added to your codebase as an .md file in your repo and runs in a set-it-and-forget-it way that will make your life easier and accelerate any project. If your team uses Claude, Codex, Cursor, Cline, or any other agent, we think you'll really enjoy what Daemons bring to the table.
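For flavor, a daemon file might look something like the sketch below. The frontmatter field names here are guesses for illustration only, not the published spec (the real reference is linked later in the thread):

```markdown
---
# Hypothetical frontmatter -- field names are illustrative,
# not the actual DAEMONS.md spec.
name: dependency-freshness
schedule: "0 9 * * MON"
watch: "a dependency has fallen more than one minor version behind"
---

Check the lockfile for stale dependencies each week, open a PR bumping
anything safely upgradable, and call out breaking changes in the PR body.
```

The body is plain natural language: the agent running the daemon interprets it, so there's no code to write.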
Looks really interesting -- quick question though: how does this differ from hooks (e.g., https://code.claude.com/docs/en/hooks)?
Looks more similar to routines to me (just launched the other day): https://code.claude.com/docs/en/routines
simonw is right, daemons are closer to routines.
compared to routines:
- daemons are specified by a DAEMON.md file in the repo (like skills). it's version-controlled and team-owned, not hidden in a dashboard or tied to a single developer's account.
- daemons have a specialized event pipeline that joins similar webhook events into a single daemon activation and can inject late-arriving events into a daemon that's already running (this is key to avoiding duplicate work and noisy actions).
- the watch conditions are a more powerful activation method because they use semantic matching and can be mixed with cron schedules.
- daemons have access to the logs from their past runs (and soon proper memory) so they can learn from their own mistakes.
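to give some intuition for the joining behavior in the second bullet, here's a toy sketch. everything in it (the `DaemonEvent` shape, keying by repo + kind) is made up for illustration, not our actual pipeline:

```typescript
// Sketch of "join similar events into one activation, inject late arrivals".
// All names and the coalescing key are illustrative, not the real pipeline.

type DaemonEvent = { kind: string; repo: string; payload: unknown };

class Activation {
  readonly events: DaemonEvent[] = [];

  inject(e: DaemonEvent) {
    this.events.push(e); // late arrivals land in the already-running activation
  }
}

class EventPipeline {
  private active = new Map<string, Activation>();

  // Events with the same (repo, kind) collapse into one activation.
  private key(e: DaemonEvent): string {
    return `${e.repo}:${e.kind}`;
  }

  dispatch(e: DaemonEvent): Activation {
    const k = this.key(e);
    const existing = this.active.get(k);
    if (existing) {
      existing.inject(e); // daemon already activated: merge, don't re-run
      return existing;
    }
    const fresh = new Activation();
    fresh.inject(e);
    this.active.set(k, fresh);
    return fresh;
  }

  // Called when a daemon run completes, so future events start a new run.
  finish(e: DaemonEvent) {
    this.active.delete(this.key(e));
  }
}
```

the point of the sketch: a burst of similar webhooks produces one activation with several injected events instead of several noisy runs.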
The drift detection angle is interesting. I'd be curious how you handle cases where two daemons touch related files — is there a way to declare ordering constraints in the .md file, or do they run in isolated branches?
Each daemon runs in its own isolate, but the output is typically shared state; e.g., multiple daemons contribute to the same PR from separate container runtimes.
It’s possible to make naive daemons that stomp on each other (as with a UNIX daemon), but they’re highly responsive to coordination instructions and generally do very well at additive rather than competitive contribution.
I feel like I must have missed something important, but I don't feel like I skipped anything.
It seems like everything is telling me to talk to Charlie to get setup. _How_ do I talk with Charlie?
I think I'm supposed to do that on an issue maybe? I'll try that.
Yes, once you've connected your GitHub (or Linear) then an issue is a good place to start talking to Charlie. Slack is good as well, but we typically do our meaty work through issues internally, since the conversation often evolves and Slack becomes a bit crowded for in-depth discussions.
here are a few more resources:
- example daemon files: https://github.com/charlie-labs/daemons
- reference docs: https://docs.charlielabs.ai/daemons
happy to answer questions. all feedback appreciated.
Are other daemons coming and/or will you accept user generated ones?
yes and yes. in the meantime, there's a list of use cases to start from here: https://docs.charlielabs.ai/daemons/choosing-daemons
Looks pretty interesting, will try it out and give you feedback! keep up the good work.
How would this work? Would one connect their repository to a cloud platform that then acts based on the repo's existing daemons?
That's exactly right. Our cloud-based agent Charlie (https://charlielabs.ai/) supports this, and our hope is that other platform providers will offer support in the future as well.
Skills live in the repository, so it felt like a natural complement. It also lets other developers see what the active daemons are and collaborate on them. With proper context, agents are quite good at writing and editing these daemon files too.
DAEMONS.md is meant to be an open spec, like skills.
there's details on how other platforms can support it here: https://ai-daemons.com/spec/#provider-guide
the hook model is event-driven: something happens, a hook fires. daemons sound like they're proposing a different mental model, where you have persistent processes that observe and react. the difference is the same as cron vs. a running service. both work, but the daemon approach makes sense when you need stateful observation across multiple events rather than just per-action triggers.
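to make the contrast concrete, a toy sketch (nothing vendor-specific; `DocsDriftDaemon` and the drift threshold are made up for illustration):

```typescript
// Hook vs. daemon as code shapes. Both names and behavior here are
// illustrative only, not any vendor's API.

type ChangeEvent = { file: string; changed: boolean };

// Hook model: a stateless function that fires once per event and
// remembers nothing between calls.
const onFileChanged = (e: ChangeEvent): string =>
  e.changed ? `lint ${e.file}` : "noop";

// Daemon model: a long-lived observer that accumulates state across
// events and acts only when drift builds up.
class DocsDriftDaemon {
  private staleFiles = new Set<string>();

  observe(e: ChangeEvent) {
    if (e.changed) this.staleFiles.add(e.file);
  }

  // Acts once enough drift has accumulated, not per event.
  maybeAct(threshold: number): string | null {
    if (this.staleFiles.size < threshold) return null;
    const action = `update docs for ${[...this.staleFiles].join(", ")}`;
    this.staleFiles.clear();
    return action;
  }
}
```

the hook answers "what do I do with this event?"; the daemon answers "what does the stream of events tell me about the system?"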
How does this compare to OpenProse, it looks similar? https://openprose.ai/
Are the two competitive or additive?
hadn't seen this before, but it looks like the daemon schedules and watch conditions could be helpful for activating openprose contracts.
The schedule is cute.
"Complete non-determinism for everything except the schedule it runs at."
Why couldn't these just be callable skills?
Callable skills can’t activate on a schedule or listen for events. Making a daemon which invokes other callable skills is a great use case!
I’m an eng on the team that built this, in full disclosure.
I do really like the idea.
But pardon my ignorance: couldn't one quite easily roll this themselves? Script the hooks and fire off a headless agent with a hook-specific prompt.
Very fair question.
One could build a simple version of this easily, e.g. set up an endpoint that listens for the particular event you care about and fires off a headless agent with your hook-specific prompt. But the work involved in listening for that event while filtering out noise and orchestrating the task is actually not trivial.
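As a rough sketch of that DIY route (assumptions throughout: `your-agent` is a placeholder CLI, the event fields and port are invented, and the hard parts the thread mentions, deduplication, coalescing, orchestration, are exactly what this skips):

```typescript
import { createServer } from "node:http";
import { spawn } from "node:child_process";

// Naive DIY daemon: a webhook endpoint that filters for one kind of
// event and fires off a headless agent. Field names and the CLI are
// placeholders, not a real integration.

// Decide whether a webhook payload is the event we care about.
function shouldTrigger(event: { action?: string; ci?: string }): boolean {
  return event.action === "synchronize" || event.ci === "failure";
}

function buildPrompt(repo: string): string {
  return `CI failed or base changed in ${repo}; get the PR mergeable again.`;
}

// Only start the listener when explicitly asked, so the helpers above
// can be exercised without binding a port.
if (process.env.RUN_DIY_DAEMON) {
  createServer((req, res) => {
    let body = "";
    req.on("data", (c: Buffer) => (body += c.toString()));
    req.on("end", () => {
      const event = JSON.parse(body || "{}");
      if (shouldTrigger(event)) {
        // Fire-and-forget a headless agent run (placeholder CLI name).
        spawn("your-agent", ["--prompt", buildPrompt(event.repo ?? "unknown")]);
      }
      res.end("ok");
    });
  }).listen(8787);
}
```

Even this toy version already needs a filtering predicate; handling duplicate deliveries, bursts of related events, and concurrent runs is where the real work starts.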
Plus, that involves writing a lot of code. It's really magical to express all of this in natural language.
For example, this is the YAML frontmatter for a daemon that keeps a GitHub PR in a mergeable state in the event of CI failures or base branch changes.
Note the lack of any code or required knowledge of GitHub webhooks.

Daemons are autonomous. From the site:
> Daemons are self-initiated — they observe the environment, detect drift, and act without a prompt.