Hi HN, we're Manik, Manoj and Harshith, and we're building CORE (https://github.com/RedPlanetHQ/core), an open-source AI butler that acts on its own and clears out your backlog.
Write `[ ] Fix the search auth bug` in a scratchpad. Three minutes later, without you at the keyboard, CORE picks it up, pulls the relevant context from your codebase, drafts a plan in the task description, and spins up a Claude Code session in the background to do the work. You review the output in the task chat and unblock it when it gets stuck.
Every AI tool today is reactive. You open a chat, brief the agent, it responds. Before anything moves, you've already done the real work: opened the Sentry error, found the commit, read the Slack thread, grabbed the Linear ticket, and stitched it all together into a prompt. The model isn't the bottleneck. You are.
Demo Video: https://www.youtube.com/watch?v=PFk4RJvQg1Y
CORE removes you from that loop. The interface is a shared scratchpad: think of a page you and a colleague both have open. You write what's on your mind. When you write a checkbox line like `[ ] Fix the search bug`, CORE converts it into a task and starts working on it after a short delay (long enough for you to add context if you want to). No prompt template. No workflow to configure.
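To make the mechanism concrete, here is a minimal sketch of how a scratchpad could be scanned for unchecked checkbox lines. The regex and function name are illustrative assumptions, not CORE's actual implementation; in CORE the matched line becomes a task that starts after the delay described above.

```python
import re

# Hypothetical pattern for an unchecked checkbox line like "[ ] Fix the search bug".
# Checked items ("[x]") intentionally do not match.
CHECKBOX = re.compile(r"^\s*\[ \]\s+(?P<title>\S.*)$")

def extract_tasks(scratchpad_text: str) -> list[str]:
    """Return the title of every unchecked checkbox line in the scratchpad."""
    return [
        m.group("title").strip()
        for line in scratchpad_text.splitlines()
        if (m := CHECKBOX.match(line))
    ]

notes = """
Thinking about the release...
[ ] Fix the search auth bug
[x] Ship the changelog
[ ] Create a widget in Linear integration
"""
print(extract_tasks(notes))
# → ['Fix the search auth bug', 'Create a widget in Linear integration']
```

In the real system, each extracted title would be queued with a grace period before an agent session is spawned, so you can keep typing context under the checkbox before work begins.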
The reason it can do this without you re-explaining everything: CORE keeps a persistent memory built from your tasks, conversations, and connected apps (Linear, Gmail, GitHub, Slack, etc.). When it spins up a Claude Code session, that session starts with your codebase and project context already loaded.
A real example: we wrote `[ ] Create a widget in Linear integration`, and about 14 minutes later CORE had opened a PR.
What CORE is _not_: it's not Devin (no autonomous web browsing or shell loops you can't see), and it's not "Claude Code with memory bolted on." It's the layer above the coding agent: it decides what should run, gathers the context, hands it to the right agent, and keeps the receipts in one place. Today the agent backend it spins up most often is Claude Code; the orchestration, scratchpad, memory, and integrations are CORE.
CORE is open source, self-hostable with `docker compose up`, and supports multiple models.
GitHub: https://github.com/RedPlanetHQ/core
Website: https://getcore.me (you can chat with Harshith's butler there)
Demo: https://www.youtube.com/watch?v=PFk4RJvQg1Y