For this #ThrowbackThursday, we will look at #ACSAC2023's second #MachineLearning #Security session. The links in this thread lead to the paper PDFs and slide decks, so be sure to check them out! 1/4
Launching the session was Quiring et al.'s "On the Detection of Image-Scaling Attacks in Machine Learning", showcasing novel methods to detect subtle manipulations in scaled images for enhanced security. (https://www.acsac.org/2023/program/final/s55.html) 2/4
#ImageProcessing #ML #SecurityinML

Then followed Weeks et al.'s "A First Look at Toxicity Injection Attacks on Open-domain Chatbots", exploring how easily malicious users can inject #toxicity into #chatbots post-deployment. (https://www.acsac.org/2023/program/final/s155.html) 3/4
#LLM #CyberSecurity #AdversarialAttacks #AIrisks

Ending the session, we saw Park et al.'s "DeepTaster: Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in Deep Neural Networks", demonstrating the detection of unlawful dataset use in #DNNs. (https://www.acsac.org/2023/program/final/s321.html) 4/4
#DeepLearning #DataSecurity