I built an AI agent that fits in 6,832 bytes. The entire runtime (binary, bridges, tools, config) is ~23 KB.

PlanckClaw is written in x86-64 assembly (obviously AI-assisted code generation for this one) and uses only 7 Linux syscalls. No libc, no allocator, no runtime. The binary is a pure router: it reads messages from named pipes, asks another pipe what tools exist, builds a JSON prompt, writes it to a third pipe, parses the response, dispatches tool calls, and relays the answer. It never touches the network or executes tools directly.
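The router's job is easiest to see in shell. A minimal sketch of one round trip, with assumed FIFO names and a fake one-line "brain" standing in for the real bridge (none of these names come from the repo):

```shell
#!/bin/sh
# Sketch of the router's pipe plumbing: read a message, forward it as a
# prompt, relay the answer. No network, no exec, just FIFOs.
dir=$(mktemp -d)
mkfifo "$dir/interact_in" "$dir/brain_in" "$dir/brain_out"

# A fake user message arrives on the interact pipe.
( echo "hello" > "$dir/interact_in" ) &

# A fake brain: read one prompt, answer it.
( read -r prompt < "$dir/brain_in"
  echo "reply-to: $prompt" > "$dir/brain_out" ) &

# The router itself.
read -r msg < "$dir/interact_in"
echo "$msg" > "$dir/brain_in"
read -r answer < "$dir/brain_out"
echo "$answer"        # prints: reply-to: hello

wait
rm -rf "$dir"
```

FIFO opens block until both ends are attached, which is what lets four independent processes rendezvous with zero shared state.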

Everything else composes around it in shell scripts (~460 lines total):
- bridge_brain.sh: curls the Anthropic API (~90 lines)
- bridge_discord.sh: Discord Gateway via WebSocket (~180 lines)
- bridge_cli.sh: terminal interface (~40 lines)
- bridge_claw.sh: tool discovery and dispatch (~50 lines)
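A brain bridge in this spirit really can be tiny. A hedged sketch, not the real bridge_brain.sh: the endpoint and headers follow Anthropic's public Messages API, but the model id and payload shape here are my assumptions:

```shell
#!/bin/sh
# Build an Anthropic Messages API request; only send it if a key is set.
# Model id is an assumption; endpoint/headers follow the public API docs.
prompt=${1:-hello}
payload=$(printf '{"model":"claude-sonnet-4-5","max_tokens":256,"messages":[{"role":"user","content":"%s"}]}' "$prompt")
echo "$payload"

if [ -n "$ANTHROPIC_API_KEY" ]; then
  curl -s https://api.anthropic.com/v1/messages \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "anthropic-version: 2023-06-01" \
    -H "content-type: application/json" \
    -d "$payload"
fi
```

Because the binary only ever sees pipes, swapping this out for a local model is just a different script on the same FIFO.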

Four processes, six named FIFOs, zero shared state. Adding a tool means dropping a shell script in claws/. No restart, no recompilation, no config change.
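A dropped-in claw might look like the sketch below. The filename and the describe/run convention are assumptions on my part (the real contract is specified in PROTOCOL.md); it's wrapped in a function here only to keep the sketch self-contained:

```shell
#!/bin/sh
# Hypothetical claw, e.g. claws/datetime.sh (name and convention assumed).
claw_datetime() {
  case "$1" in
    describe)
      # Advertise the tool to the dispatcher.
      printf '{"name":"datetime","description":"Current UTC time"}\n'
      ;;
    *)
      # Actually run the tool.
      date -u +"%Y-%m-%dT%H:%M:%SZ"
      ;;
  esac
}

claw_datetime describe
claw_datetime
```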

It does real things: tool use (via Claude's tool_use protocol), persistent conversation history in append-only JSONL, automatic memory compaction when history grows too long, and a swappable personality file (soul.md).
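The history mechanics sketch out in a few lines of shell. The file name, record shape, and threshold below are illustrative, and the real compaction presumably summarizes older turns rather than just truncating; this only shows the append-and-trigger shape:

```shell
#!/bin/sh
# Append-only JSONL history with a compaction trigger (the 2-line
# threshold is illustrative; a real compactor would summarize, not drop).
hist=$(mktemp)
append() { printf '{"role":"%s","content":"%s"}\n' "$1" "$2" >> "$hist"; }

append user "hi"
append assistant "hello"
append user "what now?"

max=2   # compact when history exceeds this many records
if [ "$(wc -l < "$hist")" -gt "$max" ]; then
  tail -n "$max" "$hist" > "$hist.tmp" && mv "$hist.tmp" "$hist"
fi

count=$(wc -l < "$hist")
echo "$count"   # prints: 2
rm -f "$hist"
```

JSONL keeps the append path a single O_APPEND write, which is why the binary never needs an allocator to maintain state.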

This started as a thought experiment: modern agent frameworks pull 400+ transitive dependencies and ship 100+ MB runtimes before generating a single token. I stumbled upon multiple minimalist initiatives like picoclaw, nanoclaw, or zeroclaw. I wanted to find the minimum viable agent (the Planck length of AI agents) and see what you could build with just pipes and syscalls.

It's not production software. Buffers are fixed-size (messages > 4 KB get truncated), it only runs on Linux x86-64, and error handling is basic. But within those limits it works reliably, and the entire codebase (~2,800 lines including the assembly) is easily auditable.

The wire-level protocol specs for all three extension points (interact, brain, claw) are documented in PROTOCOL.md if you want to write your own bridge.

6832 bytes for the core loop really shows how little actual agent logic exists once you strip away the framework overhead. Cool that you went all the way to assembly to make it visible. The pipe architecture is slick too since you never have to touch the binary to extend it.

Can I add a deploy option for this app on my platform https://clawsifyai.com?

Yes, go ahead. Tell me when it's done and how I can test/confirm it. I'll mention it in the README after integration.

A 7 KB binary that runs an agent is impressive, but I guess it would be very hard to define the FSM and implement the pipeline by hand. Is it necessary to decompose the agent this atomically, given how hard that is?
