An LLM must never be used where "receive untrusted input" and "do a bad thing" are both true.

RE: https://bsky.app/profile/did:plc:5oej2vum42qalvecfspsupmb/post/3mgdl2royhs2c
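As a loose illustration (all names below are hypothetical, not taken from the linked post), the rule can be written as a single predicate over two properties of a deployment:

```python
from dataclasses import dataclass


@dataclass
class AgentDeployment:
    """Hypothetical description of where an LLM sits in a system."""
    receives_untrusted_input: bool   # e.g. reads emails, web pages, user uploads
    can_do_a_bad_thing: bool         # e.g. sends money, deletes data, emails on your behalf


def deployment_is_acceptable(agent: AgentDeployment) -> bool:
    """The rule above: the two properties must never hold at the same time."""
    return not (agent.receives_untrusted_input and agent.can_do_a_bad_thing)


if __name__ == "__main__":
    # Summarizes arbitrary web pages but can only return text: acceptable.
    print(deployment_is_acceptable(AgentDeployment(True, False)))   # True

    # Reads inbound email AND can forward your files somewhere: not acceptable.
    print(deployment_is_acceptable(AgentDeployment(True, True)))    # False
```

Either the input is trusted, or the model is confined to actions that can't hurt you; granting both at once is the failure mode.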