Just tried this. Holy fuck.
I'd take an army of high-school graduate LLMs to build my agentic applications over a couple of genius LLMs any day.
This is a whole new paradigm of AI.
A billion stupid LLMs don't make a smart one; they just make one stupid LLM that's really fast at stupidity.
I think maybe there are subsets of problems where you can have either a human or a smart LLM write a verifier (e.g. a property-based test?) and a performance measurement, and then let the dumb models generate and iterate on candidates?
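Something like this, maybe? A rough sketch of the loop I mean, with everything toy-sized: `generate_candidate` stands in for sampling a program from a cheap model (here it just picks from a fixed pool so the thing actually runs), and the verifier is a simple property test for a sorting function:

```python
import random

# Toy stand-ins: in reality each candidate would come from a cheap model
# call; here the "model" just picks from a fixed pool so the loop runs.
CANDIDATE_POOL = [
    lambda xs: sorted(xs),                 # a correct sort
    lambda xs: xs,                         # wrong: identity
    lambda xs: list(reversed(sorted(xs))), # wrong: descending
]

def generate_candidate():
    """Stand-in for sampling a candidate program from a dumb model."""
    return random.choice(CANDIDATE_POOL)

def verifier(fn, trials=100):
    """Property-based test: output must equal the sorted input."""
    for _ in range(trials):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        if fn(list(xs)) != sorted(xs):
            return False
    return True

def swarm(n=1000):
    """Let many cheap generators propose; keep the first verified candidate."""
    for _ in range(n):
        cand = generate_candidate()
        if verifier(cand):
            return cand
    return None

winner = swarm()
print("verified candidate found" if winner else "nothing passed")
```

The point is that the verifier does the quality control, so the generators can be individually dumb as long as they're cheap and diverse.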
Yeah, maybe, but then it would make much more sense to run a big model than hope one of the small ones randomly stumbles upon the solution, just because the possibility space is so much larger than the number of dumb LLMs you can run.
I don't work this way, so this is all a hypothetical to me, but the possibility space is larger than _any_ model can handle; models are effectively applying a really complex prior over a giant combinatorial space. I think the idea behind a swarm of small models (probably with higher temperature?) on a well-defined problem is akin to e.g. multi-chain MCMC.
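To make the analogy concrete, here's a toy version of the multi-chain idea: several independent Metropolis chains over a trivial string space, where the chain's temperature plays the same role as the models' sampling temperature. This is a sketch of the analogy only; nothing here is an actual LLM call:

```python
import math
import random

TARGET = "hello agentic world"   # toy "spec": the string we want to recover
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(s):
    # Toy objective: characters matching the spec. In the thread's framing
    # this is the performance measurement / verifier signal.
    return sum(a == b for a, b in zip(s, TARGET))

def propose(s):
    # Local proposal: flip one character, like a high-temperature model
    # sampling a nearby candidate rather than the argmax.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def chain(steps=20000, temperature=0.5):
    # One independent Metropolis chain: always accept improvements,
    # sometimes accept regressions, with probability set by temperature.
    cur = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    cur_score = score(cur)
    for _ in range(steps):
        cand = propose(cur)
        delta = score(cand) - cur_score
        if delta >= 0 or random.random() < math.exp(delta / temperature):
            cur, cur_score = cand, cur_score + delta
    return cur, cur_score

# The "swarm": several independent chains from different starting points;
# keep whichever endpoint scores best.
best, best_score = max((chain() for _ in range(8)), key=lambda r: r[1])
print(best_score, repr(best))
```

No single chain is smart, but independent chains cover more of the space than one chain with the same total budget, which is the whole argument for the swarm.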
What did you try and how?
https://chatjimmy.ai
I see; the chatbot demo on the Taalas page. If they could produce this cost-effectively it would definitely be valuable. The only challenge would be getting a model to market before its next revision supersedes it.
Man, I'm in the exact opposite camp. 1 smart model beats 1000 chaos monkeys any day of the week.
When that generates 10k of output slop with less latency than my web server doing some CRUD shit... amazing!