GitHub - NVIDIA/NemoClaw: Run OpenClaw more securely inside NVIDIA OpenShell with managed inference



Am I missing something? Why is everyone talking about sandboxes when it comes to OpenClaw?

To me it's like giving your dog a stack of important documents, then being worried he might eat them, so you put the dog in a crate, together with the documents.

I thought the whole problem with that idea was that for the agent to be useful, you have to connect it to your calendar, your e-mail provider, and other services so it can act on your behalf, and that same access is what lets it create chaos and destruction.

And now, what, having inference done by Nvidia directly makes it better? Does their hardware prevent an AI from deleting all my emails?

> Am I missing something?

You are indeed missing a TON. A lot of OpenClaw users don't give it everything. We give it specific access to the set of things it needs for the tasks we want done. If I want an agent to sit there 24/7 maximizing uptime of my service, I give it access to certain data, the GitHub repo with PR privileges, and maybe even permission to restart the service. All of this has to be very thoughtful and intentional. The idea that the only "useful" way to use OpenClaw is to give it everything is a straw man.
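A scope like that can be written down explicitly as a deny-by-default allowlist. A minimal sketch of the idea; the scope names and the `is_allowed` helper are hypothetical, not part of OpenClaw's actual API:

```python
# Hypothetical allowlist for an uptime-maximizing agent.
# Anything not listed here is denied by default.
AGENT_SCOPES = {
    "metrics:read",     # read service health data
    "github:open_pr",   # open PRs, but not merge or push to main
    "service:restart",  # restart the service, but not reconfigure it
}

def is_allowed(action: str) -> bool:
    """Deny-by-default check: an action must be explicitly granted."""
    return action in AGENT_SCOPES

# The agent can open a PR and restart the service...
assert is_allowed("github:open_pr")
assert is_allowed("service:restart")
# ...but cannot merge PRs or delete anything, because those
# scopes were never granted.
assert not is_allowed("github:merge_pr")
assert not is_allowed("service:delete")
```

The point is that the grant list is small, human-readable, and reviewed up front, rather than handing over blanket credentials.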

Can you talk us through that a bit more? I suspect it would need more access than the permissions you mentioned to be more useful than a simple rules-based automation.

The problem is boundary-enforcement fatigue. People become lazy; creating tight permission scopes is tedious work. People will use an LLM to manage the scopes given to another LLM, and so on.

100% this. Human psychology is always overlooked in these discussions, and people focus on the "perfect technical solution" without considering how humans will actually end up using it. The Linux permissions scheme is a classic example: many guides advise users to keep everything as locked down as possible and expand permissions as and when required. After the 100th time of fucking around with chmod, users often give up and just make everything 777. If there were a user-friendly (but imperfect) method (like Windows' UAC), people would actually use it, and be far safer in the long run.

> creating tight permission scopes is tedious work

I have a feeling this kind of boundary configuration is the bread and butter of the current AI software landscape.

Once we figure out how to make this tedious work easier a lot of new use cases will get unlocked.

I definitely think we'll write tools to analyse the permissions and explain the worst case outcomes.
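Such a tool could start as something very simple: map each granted scope to its worst plausible outcome so a human can review the blast radius before approving. A toy sketch; the scope names and outcome strings are made up for illustration:

```python
# Toy "worst-case" analyzer: for each permission scope an agent has
# been granted, print the worst outcome that grant makes possible.
WORST_CASE = {
    "email:read": "agent can exfiltrate the contents of your inbox",
    "email:delete": "agent can delete all of your email",
    "github:push": "agent can overwrite branches with broken code",
    "cloud:admin": "agent can spin up unbounded compute on your card",
}

def explain_blast_radius(granted: set[str]) -> list[str]:
    """Return one human-readable worst-case line per granted scope."""
    return [
        f"{scope}: {WORST_CASE.get(scope, 'unknown scope, review manually')}"
        for scope in sorted(granted)
    ]

for line in explain_blast_radius({"email:delete", "github:push"}):
    print("WORST CASE -", line)
```

A real version would have to understand how scopes combine (read access plus network access is exfiltration), but even a flat lookup like this makes the review step cheap enough that people might actually do it.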

I can accept burning tokens and redoing work on the scale of hours. If I'm losing days of effort I'd be very dissatisfied. Practically speaking, people accept data loss because of poor backups, because backups are hard (not technically so much as administratively), but I'd say backups are about to become more important. Blast-radius-limiting controls will become essential: deleting every cloud-hosted photo is just a click away, spinning up thousands of EC2 nodes is incredibly easy, and credit cards have extremely weak scoping.

You could do that with, say, Claude Code too, with a much simpler setup.

OP's question was more around sandboxes, though. To which I would say: they're there to limit unintended actions on the host machine.

I want to be proven wrong, but every use case someone presents for OpenClaw is just a worse version of Claude Code, at least, so far.
So, what does having inference done by NVIDIA directly add?

Why would I want non-deterministic behavior here though?

If I want to max uptime, I write a tool to track/monitor. Then I write a small agent (non-AI) that watches those outputs and performs the remediation actions (reset something, clear something, etc., depending on the service).

Do I want Claude rewriting and breaking the subscription flow because it detected an issue? No.
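The deterministic alternative described above is small enough to sketch in full. Assuming a health-check callable and a fixed, pre-approved remediation action (both placeholders here, injected by the caller), the whole "agent" is a polling loop:

```python
import time

def monitor(check_health, remediate, interval_s=30.0,
            fail_threshold=3, max_cycles=None):
    """Deterministic uptime 'agent': poll a health check and run a
    fixed remediation after repeated consecutive failures.

    No LLM involved, so the worst case is bounded by exactly what
    `remediate` is allowed to do (e.g. restart one service).
    `max_cycles` bounds the loop (None = run forever); returns the
    number of remediations performed.
    """
    failures = remediations = cycle = 0
    while max_cycles is None or cycle < max_cycles:
        if check_health():
            failures = 0                 # healthy: reset the counter
        else:
            failures += 1
            if failures >= fail_threshold:
                remediate()              # e.g. restart the service
                remediations += 1
                failures = 0
        cycle += 1
        time.sleep(interval_s)
    return remediations
```

Usage is just `monitor(ping_my_service, restart_my_service)`. The behavior is fully predictable: it can never do anything other than call the one remediation you wrote, which is the whole argument for keeping the remediation path non-AI.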