A subtoot and a forewarning.
A program which can't discern between instructions and data will can't be safe or secure.
@jalefkowit
It's worth reminding that LLMs can't discern between instructions and data.
Also Bobby Tables
A subtoot and a forewarning.
A program which can't discern between instructions and data will can't be safe or secure.
RE: https://infosec.exchange/@alevsk/115272151364958235
> How do you secure AI agents from prompt injections and misalignment risks?
That's the neat part, you can't 🤭
LLMs don't discern between instructions and data. Thus #TheresAlwaysAPromptInjection
@georgramer
Rather that ratio will infinitely approach 100% because there's always a “gotcha”, there's always a prompt injection (#TheresAlwaysAPromptInjection), and there's always the misconceptions, or the outright lies, what those models can and can't do.
@briankrebs
To me it reminded about an almost identical hole in Copilot.
Published. A. Month. Ago.