if I have to hear the term "prompt injection" one more time
you cannot have an "injection attack" in a system with no formal distinction between data and instructions. what you actually have is an "everything is instructions" model and a failure to isolate untrusted inputs from the elevated privilege of access to private information

@jcoglan When I first met @suhacker years ago, I actually asked about this sort of thing (using "prepared statements" from SQL as an analogy), but she patiently explained how much worse the whole ecosystem is than I was imagining with my (admittedly naive) question.

The amount of Pickle exploits is too damn high

@soatok @suhacker right, whereas LLMs are analogous to having your API be that user agents just send SQL to the server; end users have whatever privileges the entire server has
@jcoglan @soatok @suhacker
Thank you! Phrasing it this way finally made it click for me.
@jaystephens @jcoglan @soatok @suhacker even this is unfair to SQL. SQL servers at least have role-based access control.