The hilarious part about people writing "skills" or prompts is that no matter how much they write… they're still interfacing with nondeterministic models

"You **must never** `rm -rf /folder` ever!"

1. There's no guarantee this won't happen (only a decreased likelihood)
2. The instruction has now put the forbidden command into the model's context… potentially increasing the likelihood of it happening

This is fine
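If you actually need the "never `rm -rf`" rule to hold, the enforcement has to live outside the prompt, in deterministic code the model can't talk its way around. A minimal sketch of that idea (the deny-list pairs and function names here are hypothetical, not from any particular agent framework):

```python
import shlex

# Hypothetical tool-layer guard: enforce the "never rm -rf" rule in
# code that runs on every tool call, instead of hoping the prompt holds.
BLOCKED_PAIRS = {("rm", "-rf"), ("rm", "-fr")}

def is_blocked(command: str) -> bool:
    """Return True if the shell command contains a denied token pair."""
    tokens = shlex.split(command)
    return any(
        (tokens[i], tokens[i + 1]) in BLOCKED_PAIRS
        for i in range(len(tokens) - 1)
    )

def run_tool(command: str) -> str:
    # The model can emit whatever it likes; this check runs regardless
    # of what the prompt said, so there's no likelihood involved.
    if is_blocked(command):
        return "refused: command matches deny list"
    return f"would execute: {command}"
```

A deny list like this is still trivially bypassable (`rm -r -f`, aliases, scripts), which is exactly why the later posts in the thread reach for kernel-level isolation instead.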

@b3ll relying on a prompt for security is wrong, of course, but that’s not the point of adding the instruction.

And it’s not like humans are particularly good at following instructions. 😆

@me1000 oh for sure, I'm just saying there's plenty of "experts" who don't understand that fundamental point

@b3ll fair enough. I suspect we’re one major prompt injection away from a reckoning with LLMs that are allowed free rein on people’s computers. But right now everyone seems to really want to push the edge of what’s possible. 💀

@me1000 @b3ll this seems interesting. Put the security layer below the agent so the agent can’t interact with it at all.

https://nono.sh/

Next-Generation Agent Security | nono

Kernel-enforced isolation, network filtering, immutable auditing, and atomic rollbacks for AI agents - built into the nono CLI and native SDKs.

@schwa @me1000 I guess the future is putting everything in hypervised jails and bonking LLMs a bunch
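The "jail" idea can be sketched even without a hypervisor: run the agent's commands in a child process with hard OS-enforced limits the model cannot negotiate with. A toy stand-in (CPU-time and file-size rlimits only; real tools like nono or bubblewrap add namespaces, network filtering, and rollbacks, which this does not attempt):

```python
import resource
import subprocess

def run_jailed(cmd, workdir, timeout=5):
    """Run a command with crude kernel-enforced limits: a CPU-time cap,
    a file-size cap, and a scratch working directory. Unix-only; a toy
    illustration of 'enforcement below the agent', not a real sandbox."""
    def set_limits():
        # Applied in the child process before exec; the LLM-driven
        # command has no way to undo these from inside.
        resource.setrlimit(resource.RLIMIT_CPU, (timeout, timeout))
        resource.setrlimit(resource.RLIMIT_FSIZE, (1_000_000, 1_000_000))

    return subprocess.run(
        cmd,
        cwd=workdir,
        timeout=timeout,
        capture_output=True,
        text=True,
        preexec_fn=set_limits,
    )
```

The point of the design: the limits live in the kernel, not in the prompt, so "bonking" the model changes nothing about what it can actually do.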