Yes, although what I think is different in this setup here is the OpenShell gateway override, as they mention:
> NemoClaw installs the NVIDIA OpenShell runtime and Nemotron models, then uses a versioned blueprint to create a sandboxed environment where every network request, file access, and inference call is governed by declarative policy. The nemoclaw CLI orchestrates the full stack: OpenShell gateway, sandbox, inference provider, and network policy.
I think this means you get a true proxy layer: a network gateway that lets you stop in-flight requests with policies you define. So the value isn't just their hardware, it's the combination of the hardware plus the OpenShell gateway and network policies.
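To make the "declarative policy governing every request" idea concrete, here's a toy sketch of what a gateway-side policy check might look like. The rule schema and hosts are invented for illustration; this is not NemoClaw's actual format.

```python
# Toy first-match-wins policy evaluator, the kind of thing a gateway
# could run against each in-flight request before letting it through.
# The rule schema here is made up, not NemoClaw's real one.
from fnmatch import fnmatch

POLICY = [
    {"action": "allow", "host": "api.example.com", "path": "/v1/*"},
    {"action": "deny",  "host": "*"},  # default-deny everything else
]

def evaluate(host: str, path: str) -> str:
    """Return the action of the first rule matching this request."""
    for rule in POLICY:
        if fnmatch(host, rule["host"]) and fnmatch(path, rule.get("path", "*")):
            return rule["action"]
    return "deny"

print(evaluate("api.example.com", "/v1/chat"))  # allow
print(evaluate("evil.example", "/"))            # deny
```

The point is that the policy lives outside the sandbox, so the agent can't rewrite its own rules.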
I also think the reason they're doing this is to build some moat around these one-click deployments and leverage their GPU-for-rent business, instead of having you go buy a Mac mini and learn "scary" stuff (remember, the user market here is pretty strange lol)
I like that these companies will name their products OpenShell or OpenVINO or whatever, as if anyone else will ever contribute to them beyond bugfixes. The message is "Come use and contribute to our OPEN ecosystem (that conspicuously only works on our hardware)! Definitely no vendor lock-in here!"
It's not something like Mesa. It's open source in the same way Chromium or Android is open source: a single company is the major contributor and decides the architecture and direction the whole ecosystem will take.
What are the odds that Intel would ever use any of this open source Nemo stuff, or vice versa? If they did, it would be a complete rewrite that favors their own hardware ecosystem and reverses the lock-in effect. When you write code that integrates with it, you're writing an interface to one company's hardware, not a common interface like Vulkan. I call it the CUDA effect.
OpenShell is the gem here, indeed. Lots of good ideas, like a network sandbox that does TLS decryption and a policy engine to set the rules. However:
> Credentials never leak into the sandbox filesystem; they are injected as environment variables at runtime.
If anyone from the team is reading: you should copy the surrogate credentials approach from here to secure the credentials further: https://github.com/airutorg/airut/blob/main/doc/network-sand...
An LLM will easily leak these credentials. The creds should live outside the sandbox entirely, and the only thing the sandbox sees is a connection API that hands back a socket/file handle.
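The surrogate idea, roughly: the sandbox only ever holds a meaningless placeholder, and the gateway swaps it for the real secret on egress. A sketch with invented names (the provider name and key values are illustrative):

```python
# Surrogate-credential sketch: the real key never enters the sandbox.
# Only the gateway holds the surrogate -> real-key mapping.
import secrets

REAL_KEYS = {"provider": "sk-real-key-kept-outside-sandbox"}

# Gateway mints one surrogate per provider at startup.
surrogates = {name: f"surrogate-{secrets.token_hex(8)}" for name in REAL_KEYS}

def rewrite_outbound(headers: dict, provider: str) -> dict:
    """Gateway-side rewrite: swap the surrogate for the real key."""
    out = dict(headers)
    if out.get("Authorization") == f"Bearer {surrogates[provider]}":
        out["Authorization"] = f"Bearer {REAL_KEYS[provider]}"
        return out
    raise PermissionError("unknown credential leaving sandbox")

# Inside the sandbox the LLM can only leak the surrogate, which is
# useless without the gateway's mapping.
sandbox_headers = {"Authorization": f"Bearer {surrogates['provider']}"}
real_headers = rewrite_outbound(sandbox_headers, "provider")
```

Even a prompt-injected agent that exfiltrates its environment only gives the attacker the surrogate string.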
Alternatively, where it needs an API key, it should be one bound to the endpoint using it, e.g. the way a ticket-granting ticket is used to create a bound service ticket.
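One simple way to get that binding is HMAC derivation from a parent secret, so a leaked token only works against the one endpoint it was minted for. This is just an illustrative scheme, not any product's actual API:

```python
# Endpoint-bound tokens: derive a per-endpoint token from a parent
# secret (the "ticket-granting" secret, which never enters the sandbox).
import hashlib
import hmac

PARENT_SECRET = b"ticket-granting-secret"  # stays outside the sandbox

def bound_token(endpoint: str) -> str:
    """Derive a token valid only for this endpoint."""
    return hmac.new(PARENT_SECRET, endpoint.encode(), hashlib.sha256).hexdigest()

def verify(endpoint: str, token: str) -> bool:
    """Gateway check: the token must match the endpoint it's presented to."""
    return hmac.compare_digest(bound_token(endpoint), token)

tok = bound_token("api.example.com/v1/chat")
print(verify("api.example.com/v1/chat", tok))   # True
print(verify("api.example.com/v1/files", tok))  # False
```

A real version would also fold in an expiry and maybe a request hash, but the shape is the same: scope flows one way, from broad secret to narrow ticket.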
A copy-on-write filesystem would be an interesting way to sandbox writes, though reviewing the resulting diff is its own challenge.
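The nice property is that the "diff" falls out for free: writes land in an upper layer, the base stays untouched, and the upper layer *is* the changeset to review. A toy model (real systems like overlayfs add wrinkles such as whiteouts for deletes, which this fakes with `None`):

```python
# Toy copy-on-write model: base layer is read-only, all mutations go
# to an upper layer, and the upper layer doubles as the review diff.
base = {"/etc/config": "original", "/data/model.bin": "weights"}
upper = {}  # path -> content, or None as a "whiteout" marking deletion

def write(path, content):
    upper[path] = content          # base is never modified

def delete(path):
    upper[path] = None             # whiteout instead of real removal

def read(path):
    if path in upper:
        return upper[path]         # None here means "deleted"
    return base.get(path)

write("/etc/config", "tampered")
delete("/data/model.bin")
print(read("/etc/config"))      # tampered
print(read("/data/model.bin"))  # None
print(upper)                    # the whole diff to review or discard
```

The hard part the comment alludes to isn't producing the diff, it's judging it: a reviewer still has to decide whether "tampered" config is malicious or a legitimate edit.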