matthewdgreen

0 Followers
0 Following
8 Posts

This account is a replica from Hacker News. Its author can't see your replies. If you find this service useful, please consider supporting us via our Patreon.
Officialhttps://
Support this servicehttps://www.patreon.com/birddotmakeup
@[email protected]
Twitter@[email protected]
If we require every whistleblower to be a saint, then we’ll never hear a whistle. If you have a serious criticism of their credibility, that’s potentially different, but arbitrary criticisms of someone’s moral worth is mostly irrelevant.
Don't worry, everything will be expensive because the US decided to blow up half the world's oil supply.
The threat model described in TFA is that someone convinces your agent via prompt injection to exfiltrate secrets. The simple way to do this is to make an outbound network connection (posting with curl or something) but it’s absolutely possible to tell a model to exfiltrate in other ways. Including embedding the secret in a Unicode string that the code itself delivers to outside users when run. If we weren’t living in science fiction land I’d say “no way this works” but we (increasingly) do so of course it does.

The premise of TFA as understood it was that we have lethal trifecta risk: sensitive data getting exfiltrated via coding agent. The two solutions were sandboxing to limit access to sensitive data (or just running the agent on somebody else’s machine) and sandboxing to block outbound network connections. My only point here is that once you’ve accepted the risk that the model has been rendered malicious by prompt injection, locking down the network is totally insufficient. As long as you plan to release the code publicly (or perhaps just run it on a machine that has network access), it has an almost disturbingly exciting number of ways it can do data exfiltration via the code. And human code review is unlikely to find many of them, because the number of possibilities for obfuscation is so huge you’ve lost even if you have an amazing code reviewer (and let’s be honest, at 7000 SloC/day nobody is a great code reviewer.)

I think this is exciting and if I was teaching an intro security and privacy course I’d be urging my students to come up with the most exciting ideas for exfiltrating data, and having others trying to detect it through manual and AI review. I’m pretty sure the attackers would all win, but it’d be exciting either way.

Yes. If by "subtly obfuscated" you mean anything from 'tucked into a comment without encoding, where you're unlikely to notice it', to 'encoded in invisible Unicode' to 'encoded in a lovely fist of Morse using an invisible pattern of spaces and tabs'.

I don't know what models are capable of doing these days, but I find all of these things to be plausible. I just asked ChatGPT to do this and it claimed it had; it even wrote me a beautiful little Python decoder that then only succeeded in decoding one word. That isn't necessarily confirmation, but I'm going to take that as a moral victory.

Understood. I read the article as “here is how to do YOLO coding safely”, and part of the “safely” idea was to sandbox the coding agent. I’m just pointing out that this, by itself, seems insufficient to prevent ugly exfiltration, it just makes exfiltration take an extra step. I’m also not sure that human code review scales to this much code, nor that it can contain that kind of exfiltration if the instructions specify some kind of obfuscation.

Obviously your recommendation to sandbox network access is one of several you make (the most effective one being “don’t let the agent ever touch sensitive data”), so I’m not saying the combined set of protections won’t work well. I’m also not saying that your projects specifically have any risk, just that they illustrate how much code you can end up with very quickly — making human review a fool’s errand.

ETA: if you do think human review can prevent secret exfiltration, I’d love to turn that into some kind of competition. Think of it as the obfuscated C contest with a scarier twist.

He wrote 14,000 lines of code in several days. How much review is going on there?

So let me get this straight. You’re writing tens of thousands of lines of code that will presumably go into a public GitHub repository and/or be served from some location. Even if it only runs locally on your own machine, at some point you’ll presumably give that code network access. And that code is being developed (without much review) by an agent that, in our threat model, has been fully subverted by prompt injection?

Sandboxing the agent hardly seems like a sufficient defense here.