#ML systems can leak confidential data from their training set even under a very silly attack. This is a direct and clear #MLsec issue that applies well beyond the #LLM case.

https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html

A 'silly' attack made ChatGPT reveal real phone numbers and email addresses

It wasn't clear what data OpenAI's chatbot was trained on, since the large language models that power it are closed-source. Until now.

Engadget
Interested in these issues? Register for this webinar (in 90 minutes TODAY) --> https://www.iriusrisk.com/iriusrisk-match-webinar-2023
MATCH up your security and compliance efforts.

Join industry experts at the forefront of secure software development for a discussion of the key topics shaping digital security today.