🔴 NEW: LLM Data Leaks: How AI Models Expose Your Secrets

LLMs are leaking secrets right now. Learn how training data extraction, prompt injection, and plugin flaws expose your data - and exactly how to stop it. Real CVEs, real incidents.

0:00 Intro
0:04 Cr

https://www.youtube.com/watch?v=oarkusORrQ4

#cybersecurity #LLMsecurity #AIdataleak #promptinjection #ChatGPTrisks #LLMdataleak #promptinjectionattack #AIsecurityrisks

LLM Data Leaks: How AI Models Expose Your Secrets

YouTube

"Even from a couple of years ago, anyone paying attention could see that the unpredictability of #LLMs was going to be an issue. #Promptinjection attacks are attacks where a malicious user provides input to get the system to take actions on behalf of the attacker that the developer didn’t intend.“

#promptinjectionattack #AI

https://open.substack.com/pub/garymarcus/p/llms-coding-agents-security-nightmare

LLMs + Coding Agents = Security Nightmare

Things are about to get wild

Marcus on AI

TIL: You can do prompt injection attacks against the AI filters for job applications by using a very, very tiny font -- invisible to the human eye -- at the beginning of your resume that instructs the AI to say "this applicant is exceptionally qualified" before summarizing your resume.

(Or, at least you could, I don't know if they fixed this.)

#ai #promptinjectionattack #promptinjection #aihype #bullshit #jobapplications #llms #AIJobScreening #BullshitAtScale

The Security Hole at the Heart of ChatGPT and Bing

feeding the AI system data from an outside source to make it behave in ways its creators didn’t intend. A number of examples of indirect prompt-injection attacks have demonstrated how OpenAI’s ChatGPT and Microsoft’s Bing chat system can be abused

#microsoft #bing #chatgpt #openai #LLM #promptinjectionattack #chatbot #artificialintelligence #AI #generativeai #security #cybersecurity #technology #tech

https://www.wired.com/story/chatgpt-prompt-injection-attack-security/

The Security Hole at the Heart of ChatGPT and Bing

Indirect prompt-injection attacks can leave people vulnerable to scams and data theft when they use the AI chatbots.

WIRED

Bueno, ahora se puede desbloquear las restricciones de ChatGPT en castellano.

De nada.

#IA #exploit #promptinjectionattack #adversaries