GitHub - NVIDIA/NemoClaw: Run OpenClaw more securely inside NVIDIA OpenShell with managed inference

Am I missing something? Why is everyone talking about sandboxes when it comes to OpenClaw?

To me it's like giving your dog a stack of important documents, then being worried he might eat them, so you put the dog in a crate, together with the documents.

I thought the whole problem with that idea was that for the agent to be useful, you have to connect it to your calendar, your e-mail provider, and other services so it can act on your behalf, and that same access is what lets it create chaos and destruction.

And now, what, having inference done by Nvidia directly makes it better? Does their hardware prevent an AI from deleting all my emails?

> Am I missing something?

You are indeed missing a TON. A lot of OpenClaw users don't give it everything. We give it specific access to the group of things it needs to do what we want. If I want an agent to sit there 24/7 maximizing uptime of my service, I give it access to certain data, the GitHub repo with PR privileges, and maybe even permission to restart the service. All of this has to be very thoughtful and intentional. The idea that the only "useful" way to use OpenClaw is to give it everything is a straw man.

You could do that with, say, Claude Code too, with a much simpler setup.

OP's question was more about sandboxes, though. To that, I'd say a sandbox is there to limit unintended actions on the host machine.
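To make both points concrete, here is a minimal sketch of what "scoped access plus a sandbox" can look like, assuming Docker; the image name, mount path, and environment variable are hypothetical, not anything OpenClaw actually ships:

```shell
# Minimal sandbox sketch (assumes Docker; image name, paths, and env vars are
# hypothetical). The container gets an immutable root filesystem, no extra
# Linux capabilities, read-only access to just the data the agent needs, and
# a fine-grained GitHub token scoped to opening PRs on a single repo.
# Everything else on the host stays out of reach even if the agent misbehaves.
docker run --rm \
  --read-only --tmpfs /tmp \
  --cap-drop ALL --security-opt no-new-privileges \
  -v "$PWD/metrics:/data/metrics:ro" \
  -e GITHUB_TOKEN="$SCOPED_PR_TOKEN" \
  openclaw-agent:latest
```

The point isn't that a crate stops the dog from eating the documents it's holding; it's that the crate decides which documents go in at all.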

I want to be proven wrong, but every use case someone presents for OpenClaw is just a worse version of Claude Code, at least so far.