RE: https://lingo.lol/@thedansimonson/116156498254821672

this is also why LLMs are not "another layer of abstraction". do you need to read the asm output of your compiler every time it runs? no, and having to do so would substantially reduce the value of said compiler

this is also downstream of programmers not knowing what the word "abstraction" means. to the extent LLMs correspond to any existing software idea, they are indirection, and they are indirection that adds RNG to anything they're connected to

@jcoglan I've written a short post about a related topic: LLMs are the opposite of DRY: https://cs.wellesley.edu/~pmwh/advice/aiDRY.html

I just read a security post that talked about an AI code review process successfully rejecting poisoned instructions that attempted to manipulate it into using it's permissions to let a bad actor in. The post made it seem like this was good "defense in depth" but to anyone thinking about it for a minute, you can see that this isn't reliable. There's no explanation to be had for why the bot recognized that the instructions were manipulative and didn't follow them, and thus no guarantee that it would have the same reaction again, even given the exact same input! In fact, it's highly likely that a bit better prompt engineering would indeed make the system into a liability, rather than an asset.

Peter Mawhorter