For this #ThrowbackThursday, we look back at #ACSAC2024's Generative AI (for) Security session. The links in this thread lead to the paper PDFs and slide decks, so be sure to check them out! 1/6
The session started with Zhou et al.'s "Enhancing Database Encryption," highlighting new adaptive measures against LLM-based reverse engineering. (https://www.acsac.org/2024/program/final/s313.html) 2/6
#GenerativeAI #LLM #Cybersecurity #DatabaseSecurity
The second paper in this session was Bhusal et al.'s "SECURE: Benchmarking Large Language Models for Cybersecurity," introducing a benchmark to assess LLMs in realistic cybersecurity scenarios. (https://www.acsac.org/2024/program/final/s431.html) 3/6
#LLM #Cybersecurity #Benchmarking
Third came Song et al.'s "Not All Tokens Are Equal: Membership Inference Attacks Against Fine-tuned Language Models," which introduces WEL-MIA, a practical attack demonstrating privacy threats in fine-tuned language models. (https://www.acsac.org/2024/program/final/s467.html) 4/6
#MembershipInference #ML #Cybersecurity
Fourth was Zhang et al.'s "Stealing Watermarks of Large Language Models via Mixed Integer Programming," showcasing an attack that can compromise state-of-the-art watermark schemes. (https://www.acsac.org/2024/program/final/s355.html) 5/6
#AI #Watermarking #Cybersecurity
Finally, we had Rahman et al.'s "Towards a Taxonomy of Challenges in Security Control Implementation," which proposes a taxonomy of 73 challenges to enhance cyber defense. (https://www.acsac.org/2024/program/final/s366.html) 6/6
#Cybersecurity #SecurityAnalysis #SecurityChallenges