@jcoglan I've written a short post about a related topic: LLMs are the opposite of DRY: https://cs.wellesley.edu/~pmwh/advice/aiDRY.html
I just read a security post describing an AI code review process that successfully rejected poisoned instructions attempting to manipulate it into using its permissions to let a bad actor in. The post framed this as good "defense in depth," but anyone who thinks about it for a minute can see that it isn't reliable. There's no explanation for why the bot recognized the instructions as manipulative and declined to follow them, and thus no guarantee that it would react the same way again, even given the exact same input! In fact, it's quite likely that slightly better prompt engineering would turn the system into a liability rather than an asset.