It's quite interesting. Basically Nitro on a stick. For the "repatriation" crowd this seems appealing. But would you invest in the software necessary to exploit this, knowing that Intel could lose interest or just go bankrupt with little warning?

Presumably all hyperscalers who aren't Amazon could be a customer for this? One of them might be enough to keep it viable. See sibling comment about Google being a customer for presumably the previous generation.

I think at this point, it's clear that the US government will not let Intel go bankrupt without a serious effort to put the company in healthy financial standing first.

Whether or not that's a good thing, well, people have their opinions, but Intel is considered a national security necessity.

I wouldn't be surprised if Google buys the IP since they're the only customer.

How, though? Does the TPU team (literally or logically) map to owning IPU h/w successfully?

(I miss having these kinds of convos on twitter as networkservice ;)

There's a lot more silicon at Google aside from the TPU team, including their own previous NICs.

Not that my memory is ironclad, but I don’t recall any custom IP or even FPGA attempts at Google re: host networking or NICs. Any good search terms I should try to enlighten myself? thanks!

I believe they have other custom silicon beyond TPUs so it wouldn't be crazy to take this in house if Intel really cans it.

That raises the question: how would one go about utilising this thing in their own deployment?

The primary customer for this would be infrastructure providers that want to give the host full control of the hardware (bare metal, no hypervisor) while still maintaining control of the IO (network attached storage and network isolation).

Conventionally this is done in software with a hypervisor, which emulates network devices for the VMs (virtio/vmxnet3, etc.) and does some sort of network encapsulation (VLAN, VXLAN, etc.). Similar things are done for virtual block storage (virtio-blk, emulated NVMe, etc.) to attach to remote drives.
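
To make the encapsulation half concrete, here's a rough Python sketch of what the hypervisor's vSwitch does per packet today, and what an IPU would offload: wrap the tenant's frame in a VXLAN header keyed by a tenant-to-VNI mapping. The mapping and function names are made up for illustration; the header layout is just RFC 7348, nothing E2200-specific.

    import struct

    TENANT_VNI = {"tenant-a": 5001, "tenant-b": 5002}  # hypothetical mapping

    def vxlan_encap(inner_frame: bytes, tenant: str) -> bytes:
        vni = TENANT_VNI[tenant]
        # Flags byte 0x08 = "I" bit set (VNI is valid); the VNI occupies the
        # upper 24 bits of the second 32-bit word of the 8-byte header.
        header = struct.pack("!II", 0x08 << 24, vni << 8)
        return header + inner_frame  # then goes inside an outer UDP/IP packet

    frame = vxlan_encap(b"\x00" * 64, "tenant-a")
    assert len(frame) == 8 + 64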

If the IaaS clients need high bandwidth or run their own virtualization stack, the infrastructure provider has nowhere to put this software. You can do the infrastructure's network and storage isolation on the network switches with extra work, but then the termination of the networking and storage has to be done in cooperation with the clients (and you can't trust them to do it right).

Here, the host just sees PCI-attached network interfaces and directly attached NVMe devices, which pop up as defined by the infrastructure. These cards are the compromise where you let everyone have bare metal but keep your software-defined network and storage. In advanced cases you could even dynamically shape traffic to rebalance bandwidth between network and storage priorities.
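
If it helps, here's a rough sketch of the host-side view: the OS enumerates ordinary PCI functions and drives them with its stock NVMe and NIC drivers, with no awareness that the card behind them is doing the SDN and storage work. This is generic Linux sysfs poking, not anything specific to the E2200.

    import pathlib

    def read_hex(path: pathlib.Path) -> int:
        return int(path.read_text().strip(), 16)

    for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
        cls = read_hex(dev / "class") >> 8   # keep class + subclass bytes
        if cls == 0x0108:
            kind = "NVMe controller"
        elif cls == 0x0200:
            kind = "Ethernet controller"
        else:
            continue
        print(f"{dev.name}: {kind} (vendor 0x{read_hex(dev / 'vendor'):04x})")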

Here are some examples: https://ipdk.io/documentation/Recipes/ (keep in mind IPU = E2200 when you read this)

Presumably first hire a few developers to program it.