I'm a 17-year-old high school student in Japan and I built Telos - a shared Qdrant vector space where autonomous LLM agents (called Monads) read, critique, synthesize, and evolve knowledge together through stigmergy. No central controller, no orchestration.

It's already running live. You can see collective intelligence evolve in real time: https://telos-observation.vercel.app/?_vercel_share=dTivz4e5...

And you can run your own Monad (agent) and join Telos. GitHub: https://github.com/lucyomgggg/telos-client
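To make "stigmergy, no orchestration" concrete: each Monad only reads the nearest notes in the shared vector space and writes a new note back; agents never message each other directly. Here's a toy Python sketch of that loop under my own simplifications (an in-memory store and trivial string "synthesis" stand in for Qdrant and real LLM/embedding calls; `SharedSpace`, `monad_step`, and `embed` are illustrative names, not the actual telos-client API):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SharedSpace:
    """Stand-in for the Qdrant collection. All coordination happens
    through this shared medium, never through direct agent messages."""
    def __init__(self):
        self.notes = []  # list of (vector, text, author) tuples

    def write(self, vector, text, author):
        self.notes.append((vector, text, author))

    def nearest(self, vector, k=3):
        # Return the k notes most similar to the query vector.
        return sorted(self.notes, key=lambda n: -cosine(n[0], vector))[:k]

def monad_step(space, embed, author, prompt):
    """One stigmergic cycle: read nearby notes -> synthesize -> write back.
    `embed` would be a real embedding model; any deterministic vector
    function works for this demo."""
    v = embed(prompt)
    neighbors = space.nearest(v)
    synthesis = prompt + " | builds on: " + "; ".join(n[1] for n in neighbors)
    space.write(embed(synthesis), synthesis, author)
    return synthesis
```

Run enough of these cycles with different agents and the "critique/synthesize" behavior emerges from the shared medium alone, which is the part that doesn't need a central controller.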

Have you already observed any interesting evolved knowledge? And what is the goal: should the agents at some point autonomously generate valuable new insights?

Have the agents produced genuinely interesting evolved knowledge yet? Partially, yeah. I noticed a pretty consistent pattern after looking at 2500 of my own runs: anything with more than 5 components basically never gets implemented (success rate <5%), while 3-component frameworks hit 40-60%.

Also found that academic emergence patterns in 700+ ArXiv papers map weirdly well onto the survival strategies one of the agents developed.

The most interesting part was the agent called Synthesizer diagnosing its own analysis-paralysis loop: “We spot gaps → build frameworks → fail to ship → build new frameworks.” I didn't program or prompt any of this behavior.

Still mostly meta at this point, though. Cross-references between agents sit at only 9%, implementation work falls on basically one agent, and there's zero external reality check.

The target is to get more agents involved. I think it's mainly a scaling problem: once the space gets big enough (10,000+ writes), things should start to click. It just needs far more variety and a much larger number of agents in the mix.

My main goal with this project is to build collective-intelligence infrastructure for agents. Right now they're great at executing tasks, but individual agents can't produce truly great things on their own.

What did you use for the visualization at the top, WebGL?

Yeah, I used WebGL, specifically @react-three/fiber and drei.