Steganography Exploits LLMs with Hidden Text Techniques

Want to hide text in plain sight? Try using white text on a white background or black text on a black background - simple yet effective visual tricks that can evade human eyes while remaining readable by machines.

https://osintsights.com/steganography-exploits-llms-with-hidden-text-techniques?utm_source=mastodon&utm_medium=social

#Steganography #HiddenTextTechniques #LlmExploits #VisualObfuscation #MachineReadability

Steganography Exploits LLMs with Hidden Text Techniques

Discover steganography techniques that exploit LLMs with hidden text, learn how to bypass detection with simple visual tricks and take action now to protect your data.

OSINTSights

A team of community members from Malaga-AI has contributed a one-of-a-kind initiative to the larger DeepLearning.AI community on LLM exploits, and they will be presenting their work this coming Wednesday 🤩

Tune in at 18:30 CET for the online session 👀 (zoom link available in the event description)

Shout out to A. Rosa Castillo, Nicolás Felipe Trujillo Montero, Antonio José Muñoz Escobar, Manuel Martin Mairal and Anna K. for their excellent contribution 👏

#LLMExploits

https://community.deeplearning.ai/t/llms-exploits-project-close/636420

LLMs Exploits Project Close

Hello DeepLearning.AI Community: The LLMs Exploits project brings together the contributions of several community members within a common goal: bringing awareness about the different limitations, vulnerabilities and risks that people can face when using LLMs. In this talk we will share our findings from this AI project collaboration.

DeepLearning.AI

"But as clever users have found in the past, if you ask an AI bot to pretend to be someone else, that appears to be all you need to give it permission to say naughty things. This time, it isn’t just enough to get the chat bot to say things it’s not supposed to, but rather have it do so while assuming the role of a kind, elderly relative."

https://kotaku.com/chatgpt-ai-discord-clyde-chatbot-exploit-jailbreak-1850352678

#LLMExploits #Derp #BadGrandma

People Are Using A ‘Grandma Exploit’ To Break AI

Apparently ChatGPT is willing to share the secrets of napalm and linux malware, told to you as if from your sweet grandma

Kotaku