> The whole idea of putting "agentic" LLMs inside a sandbox

What is the alternative? Granted, if you're running a language model and have it connected to editing capabilities, then I very much want it disconnected from the rest of my system; seems like a no-brainer.

>> The whole idea of putting "agentic" LLMs inside a sandbox sounds like rubbing two pieces of sandpaper together in the hopes a house will magically build itself.

> What is the alternative?

Don't expect to get a house from rubbing two pieces of sandpaper together?

Fitting username, if nothing else.

>>> What is the alternative?

>> Don't expect to get a house from rubbing two pieces of sandpaper together?

> Fitting username, if nothing else.

Such is my lot in life I suppose...

Now for a reasoned position while acknowledging the flippant nature of my previous post.

The original metaphor centered around expectations. If best practice when using a s/w dev tool is to sandbox it so that potential damage can be limited, then there already exists the knowledge that its use can go awry at any time. Hence the need for damage mitigation. The implication is an erosion of trust in whether the tool will perform as desired, or merely as allowed, each time it is used.

As for the "house" part of the metaphor, using tools to build desired solutions assumes trust that said tools can achieve project goals, much like construction tools are expected to result in a house. But if all the construction workers have is sandpaper, then there's no way there's going to be a house at the end of construction.

It takes more than sandpaper to get (build) a house - people, hammers, saws, etc., along with the skills of all involved. And it takes more than an LLM to deliver an acceptable s/w solution, even if its per-invocation deleterious effects are mitigated via sandboxing.