> Kinda like a CD-ROM/Game cartridge, or a printed book, it only holds one model and cannot be rewritten.

Imagine a slot on your computer where you physically pop out and replace the chip with different models, sort of like a Nintendo DS.

That slot is called USB-C. I can fully imagine inference ASICs coming in powerbank form factor that you'd just plug and play.

Like the chip-software in Gibson’s Sprawl, from the microsoft to the ROM cowboy to the Aleph, the endgame of computer-tool distribution is via single-use chunks of quasi-biological computronium

Michael Bay just read "computronium" and spawned an 8 movie franchise in his head.

This would be a hell of a hot power bank. It uses about as much power as my oven. So probably more like inside a huge cooling device outside the house. Or integrated into the heating system of the house.

(Still compelling!)

*the whole server uses 2.2kW or whatever, not a single board. I think that figure was for 8 boards or something.

Oh does it? Thanks for the clarification then. Their home page said 2.5kW so I assumed that's what it is.

To be fair, 2.5kW does sound too much for a single 3x3cm chip, it would probably melt.

More powwwwaaa!

Yeah, though I suppose once we get properly 3D silicon I wouldn't be surprised by that power rating; 3cm^3 of compute would be something to behold.

> USB-C

With these speeds you can run it over USB2, though maybe power is limiting.

You would likely need external power anyway.

USB-C is just a form factor and has nothing to do with which protocol you run at which speeds.

I wasn't talking about the form factor.

Not if you need 200W to run inference.

USB-C can do up to 240W. These days I power all my devices with a USB hub, even my LiPo charger.
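For reference, the 240W ceiling comes from USB PD 3.1's Extended Power Range, which adds fixed 28V, 36V, and 48V levels on top of the older 20V maximum. A quick sanity check on the arithmetic:

```python
# USB Power Delivery voltage/current ceilings (per the USB PD 3.1 spec).
# Standard Power Range tops out at 20 V x 5 A; Extended Power Range
# raises the voltage ceiling to 48 V at the same 5 A current limit.
spr_max_w = 20 * 5   # pre-EPR ceiling
epr_max_w = 48 * 5   # EPR ceiling, requires an EPR-marked cable

print(spr_max_w)  # 100
print(epr_max_w)  # 240
```

Note that hitting 240W needs the whole chain (source, cable, sink) to be EPR-capable, not just any USB-C port.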

Have you seen a device that can supply 240W and act as a data host? Or is the 240W only from dedicated chargers?

I haven't seen one, but I also don't tend to use it for anything other than a power supply, so I wouldn't know. Since the standard supports it, though, it's just a matter of the market needing a device like that.

Pretty sure it'd just be a thumbdrive. Are the Taalas chips particularly large in surface area?

The only product they've announced at the moment [0] is a PCI-e card. It's more like a small power bank than a big thumb drive.

But sure, the next generation could be much smaller. It doesn't require battery cells, (much) heat management, or ruggedization, all of which put hard limits on how much you can miniaturise power banks.

[0] https://taalas.com/the-path-to-ubiquitous-ai/

I wouldn't call that size a small power bank. That chip is in the same ballpark as gaming GPUs, and based on the VRMs in the picture it probably draws about as much power.

But as you said, the next generations are very likely to shrink (especially with them saying they want to do top of the line models in 2 generations), and with architecture improvements it could probably get much smaller.

I’m old enough to remember your typical computer filling warehouse-sized buildings.

Nowadays, your average cellphone has more computing power than those behemoths.

I have a micro SD card with 256GB capacity, and I think they are up to 2TB. On a device the size of a fingernail.

That is all definitely amazing, but data storage is a fundamentally different process with far fewer constraints than continuous computation.

It all uses the same miniaturization techniques, though.

800 mm², about 90 mm per side, if imagined as a square. Also, 250 W of power consumption.

The form factor should be anything but thumbdrive.

mmmhhhhh 800 mm² ≈ (30 mm)², which is more like a (biggish) thumb drive.
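The correction above checks out: the side of a square with the quoted die area is the square root of the area, not a tenth of it.

```python
import math

area_mm2 = 800                  # die area quoted upthread
side_mm = math.sqrt(area_mm2)   # side of an equivalent square

print(round(side_mm, 1))        # 28.3 -> roughly a 30 mm square
```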

Thanks!

I haven't had my coffee yet. ;)

Shit happens :D

always after the coffee :)

the radiator wouldn't be though

That's the kind of hardware I'm rooting for, since it'll encourage open-weights models and would be much more private.

In fact, I was thinking: if robots of the future had such slots, they could swap in different models depending on the task they're given. Like a hardware MoE.

> Since it'll encourage open-weights models

Is this accurate? I don't know enough about hardware, but perhaps someone could clarify: how hard would it be to reverse engineer this to "leak" the model weights? Is it even possible?

There are some labs that sell access to their models (mistral, cohere, etc) without having their models open. I could see a world where more companies can do this if this turns out to be a viable way. Even to end customers, if reverse engineering is deemed impossible. You could have a device that does most of the inference locally and only "call home" when stumped (think alexa with local processing for intent detection and cloud processing for the rest, but better).
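The "call home when stumped" idea above could be sketched roughly like this. All function names and the confidence threshold here are made up for illustration; the local and cloud calls are stand-in stubs:

```python
def local_asic_infer(prompt):
    """Stub for the fixed-weights chip; returns (text, confidence).
    Here short prompts are 'easy' and long ones are 'hard'."""
    return ("local answer", 0.9 if len(prompt) < 50 else 0.4)

def cloud_infer(prompt):
    """Stub for a hosted-model API call."""
    return "cloud answer"

def answer(prompt, threshold=0.8):
    # Try the on-device model first; escalate only when it is unsure.
    text, confidence = local_asic_infer(prompt)
    return text if confidence >= threshold else cloud_infer(prompt)

print(answer("hi"))       # handled locally
print(answer("x" * 60))   # escalated to the cloud
```

The privacy win is that only the low-confidence residue ever leaves the device, which is essentially the Alexa-style intent-detection split described above.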

It's likely possible to extract model weights from the chip's design, but you'd need tooling at the level of an Intel R&D lab, not something any hobbyist could afford.

I doubt anyone would have the skills, wallet, and tools to RE one of these and extract model weights to run them on other hardware. Maybe state actors like the Chinese government or similar could pull that off.

This is what I've been wanting! Just like those eGPUs you would plug into your Mac. You'd have a device capable of running a top-tier model sitting under your desk. All local, completely private.

A cartridge slot for models is a fun idea. Instead of one chip running any model, you get one model or maybe a family of models per chip at (I assume) much better perf/watt. Curious whether the economics work out for consumer use or if this stays in the embedded/edge space.

Plug it into the skull bone. Neuralink plus a slot for a model that you can buy in a grocery store, like a prepaid Netflix card.

We'd better solve the energy usage and cooling first, otherwise that will be a very spicy body mod.

Would somewhat work except for the power usage.

I doubt it would scale linearly, but for home use, 170 tokens/s at 2.5W would be cool; 17 tokens/s at 0.25W would be awesome.
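For perspective, those wished-for numbers can be framed as energy per generated token, assuming the 170 tokens/s throughput figure holds at each power level:

```python
def joules_per_token(power_w, tokens_per_s):
    """Energy cost of generating one token at a given power draw."""
    return power_w / tokens_per_s

# The ~2.5 kW server figure discussed upthread vs. the hoped-for 2.5 W:
print(round(joules_per_token(2500, 170), 2))   # 14.71 J/token today
print(round(joules_per_token(2.5, 170), 4))    # 0.0147 J/token, the dream
```

That's a three-orders-of-magnitude gap, which is why it reads more like a several-generations hope than a next-product roadmap.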

On the other hand, this may be a step towards positronic brains (https://en.wikipedia.org/wiki/Positronic_brain)

Yeah maybe you can call it PCIe.