Software Developer

https://www.lattyware.co.uk

[ my public key: https://keybase.io/latty; my proof: https://keybase.io/latty/sigs/W8a_MqlArh7H9p_JWUWnXFGx94t6slLire3_2Z9_ECM ]

Everything to do with LLM prompts reminds me of people using regexes to try to sanitise input against SQL injection a few decades ago: it just papers over the flaw without giving any guarantees.
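A minimal sketch of that contrast (an illustrative example, not anything from the original comment): the regex approach only blocks whatever the pattern happens to anticipate, while a parameterized query never lets the input be interpreted as SQL at all.

    import re
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    def lookup_fragile(name: str):
        # Old-style "sanitisation": strip characters that look dangerous and hope.
        # Anything the regex doesn't anticipate still reaches the query as SQL.
        cleaned = re.sub(r"['\";]", "", name)
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{cleaned}'"
        ).fetchall()

    def lookup_safe(name: str):
        # Parameterized query: the driver treats the value purely as data,
        # so there is nothing to "sanitise" in the first place.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()

The analogy to prompts is that piling on filters for known-bad input is the `lookup_fragile` approach; there is currently no prompt-level equivalent of the parameterized query.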

It's weird seeing people just add a few more "REALLY REALLY REALLY REALLY DON'T DO THAT" lines to the prompt and hope for the best. To me that's an unacceptable risk: any system using these needs to treat the entire LLM as untrusted the second any user input goes into the prompt.
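As a sketch of what "treat the LLM as untrusted" can mean in practice (the JSON shape and action names here are illustrative assumptions, not a real API): parse the model's output defensively and only act on an explicit allow-list, rather than relying on the prompt to keep the model well-behaved.

    import json

    # Hypothetical allow-list: the only actions the application will ever perform,
    # regardless of what the model asks for.
    ALLOWED_ACTIONS = {"summarise_ticket", "label_ticket"}

    def handle_model_output(model_output: str):
        # Treat the model's output like any other untrusted input:
        # parse it defensively and refuse anything outside the allow-list.
        try:
            request = json.loads(model_output)
        except json.JSONDecodeError:
            return None
        action = request.get("action")
        if action not in ALLOWED_ACTIONS:
            # Don't ask the model nicely not to do this; just refuse.
            return None
        return action

The enforcement lives outside the model, in ordinary code, which is the only place you can actually get a guarantee.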