These videos are worth a watch. There are tons of impressive moments, but they had me at the very first one where a woman says: "I'm going to tell you a story," and then pauses for a long, luxurious sip from a cup of coffee, and the model ... does nothing, just waits. Take my money.
Speaking of taking my money, what's the economic model for a company like this? They've published a fair amount about their architecture - enough that I imagine frontier labs could implement it. Patents? Trade secrets? It's hard for me to see how you'd beat the training compute and know-how at Anthropic/GOOG/oAI/Meta without some sort of legal protection.
I can't wait to see what these model architectures do with, like, 30-40% lower latency and more model intelligence. Very appealing. For reference, these look to be roughly 1/10 the size of the Opus 4.7 / GPT 5.x series -- 275B total parameters, 12B active. So there's lots of room to add intelligence, and lots of hope that we could see lower latency.
> They've published a fair amount about their architecture - enough that I imagine frontier labs could implement.
i think the real ones know this is the tip of the iceberg: hparam tuning, data recipes, data collection, custom kernels, rl/eval infra -- all immensely deep topics that condense multiple decades of phd lifetimes to produce SOTA performance (in both senses of the word) like this.
i would also calibrate what you are impressed by. simply waiting is a posttrain thing - the fact that gemini and oai have not prioritized it is not something you should overindex on. what they showed with full duplex is technically far, far harder to achieve.
I agree that full duplex is the amazing bit. For instance, the three engineers shouting trivia questions while a timer is running — that’s extremely novel as far as I can tell.
I’d like to believe from the demos that this ability to wait falls out of the model as an emergent property -- perhaps from a small RL loop -- rather than a specifically trained behavior, à la a VAD component in a stack. Either way, I would guess that a VAD absolutely cannot do this right now: interruptions are highly annoying in every voice interaction experience, and if it were a simple matter of better post-training, SOMEONE (e.g. ElevenLabs) would have done it already.
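To make the VAD point concrete, here's a minimal sketch (my own illustration, not anything from the demos or from any real stack) of why threshold-based turn-taking interrupts during long pauses: a classic pipeline declares end-of-turn after N consecutive low-energy frames, so a long mid-sentence pause is indistinguishable from the user being done.

```python
def vad_end_of_turn(frame_energies, threshold=0.01, silence_frames=25):
    """Return the frame index where a simple energy-threshold VAD would
    declare end-of-turn (and the bot would start talking), or None if it
    never fires.

    frame_energies: per-frame RMS energy. At 20 ms per frame,
    silence_frames=25 corresponds to ~500 ms of continuous silence.
    """
    quiet = 0
    for i, energy in enumerate(frame_energies):
        # Count consecutive quiet frames; any speech resets the counter.
        quiet = quiet + 1 if energy < threshold else 0
        if quiet >= silence_frames:
            return i
    return None

# Speech, then a ~2 s mid-sentence pause (the coffee sip), then more speech:
# the VAD fires partway through the pause and the bot barges in.
utterance = [0.2] * 50 + [0.001] * 100 + [0.2] * 50
print(vad_end_of_turn(utterance))  # fires at frame 74, inside the pause
```

The only knob here is `silence_frames`, and raising it high enough to survive a coffee-sip pause adds that same delay to every normal turn end. That latency/interruption tradeoff is exactly what a model that genuinely understands turn-taking wouldn't have to make.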
But I disagree with your idea that this is too expensive/too hard to replicate. For me, yes. But there's an existence proof: a small team at a new company just did this without a real roadmap, certainly for less than $1B and probably in less than two years. They are almost certainly less skilled, across your list of requirements, than the teams at the frontier labs -- who have now been given a roadmap. So I don't think it's as difficult as you propose, from an organizational-skills perspective.
In China it's become well known that promising new companies will get an offer from either Alibaba or Tencent. In the US, it's probably similar. Everything that's out in the open can get acquired or simply copied. Maybe that's what Thinking Machines is hoping for as well?
Publish a Demo -> acquihire for anthropic/oAI/GOOG/META stock and cash is an understandable economic model. In this case, I feel like they built more than would be needed though — and I hope they deploy something useful, I’d love to play with it.
they hire leading researchers, and leading researchers won't work for you unless they're able to publish
That was true 10 years ago. It’s most definitely not true now. The arms race is very real.
> leading researchers won't work for you unless they're able to publish
oh, honey.
Do we want the whole of humanity to get richer, or a few individuals (company owners)?
Which seems bizarre. Companies can’t afford to just give things away right?
Yes, they can. Your research papers are not the whole story. It’s like how Google could open-source their entire monorepo and very little would change: no one else could operate it.