The coderunner README talks about reading files without sending them to the cloud. Does that mean there is something agentic going on? That’s more than I’d expect from something called a sandbox.
Also, if it is agentic, why is it less cloud-based than, e.g., Claude Code? Are there LLMs running locally?
Regarding files: they are volume-mapped (local <--> sandbox), as with Docker.
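For anyone unfamiliar with volume mapping, here's a minimal Docker sketch of the same idea (the directory names and image are illustrative, not taken from coderunner's docs):

```shell
# Bind-mount a local directory into a container. The container reads and
# writes ./workspace in place at /sandbox, so no file contents are uploaded
# anywhere -- the "sandbox" just sees your local files directly.
docker run --rm -v "$(pwd)/workspace:/sandbox" alpine ls /sandbox
```

Anything the sandboxed process writes under /sandbox shows up in ./workspace on the host, and vice versa.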
It's not agentic itself; agents can use it to execute code. Those agents can be powered by any LLM, including local ones.
I’m still not sure why sending files to the cloud is supposed to be a disadvantage of other approaches but not this one. Whether or not you run your LLM’s commands in this sandbox, content goes to the cloud if the LLM is in the cloud and stays local if the LLM is local. The amount of data sent to the cloud seems entirely orthogonal to whether you use coderunner.
If you run your LLMs locally, then nothing goes to the cloud. If you use cloud offerings, then of course nothing is going to help you.
I think their point is more that the architecture of this CodeRunner program isn't very clear.
It's unclear whether it is a container manager or comes with an LLM agent built in. These are two separate concerns, and the README makes it very unclear how to use one without the other.
I could say the same about any AI architecture: by definition, cloud = cloud and local = not cloud. So when coderunner advertises something like “more privacy because less cloud”, I’m not sure what it is about coderunner that gets me less cloud than anything else.