The hilarious part about people writing "skills" or prompts is that no matter how much they write… they're still interfacing with nondeterministic models

"You **must never** `rm -rf /folder` ever!"

1. There's no guarantee this won't happen (only a decreased likelihood)
2. The instruction has now put the very thing it shouldn't do into the model's context… potentially *increasing* the likelihood of it happening
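A minimal sketch of the distinction: a prompt instruction only lowers the probability of the bad command, while a check enforced in code refuses it deterministically. The `FORBIDDEN` set and `is_allowed` helper here are hypothetical, just to illustrate the idea:

```python
import shlex

# Hypothetical denylist — illustrative only, not from the thread.
FORBIDDEN = {("rm", "-rf"), ("rm", "-fr")}

def is_allowed(command: str) -> bool:
    """Deterministically reject destructive commands before execution,
    no matter what the model's prompt said."""
    tokens = shlex.split(command)
    if len(tokens) >= 2 and (tokens[0], tokens[1]) in FORBIDDEN:
        return False
    return True

print(is_allowed("ls -la"))          # True
print(is_allowed("rm -rf /folder"))  # False
```

Unlike a "you must never" line in the prompt, this check runs outside the model, so its answer doesn't depend on sampling.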

This is fine

@b3ll relying on a prompt for security is wrong, of course, but that’s not the point of adding the instruction.

And it’s not like humans are particularly good at following instructions. 😆

@me1000 oh for sure, I'm just saying there's plenty of "experts" that don't understand that fundamental
@b3ll fair enough. I suspect we’re one major prompt injection away from a reckoning with LLMs that are given free rein on people’s computers. But right now everyone seems to really want to push the edge of what’s possible. 💀