I presented “NICraft: Malicious NIC Firmware-based Cache Side-channel Attack” at ESORICS 2025.
We show a cache side-channel attack launched from the NIC itself. We devised new signal-amplification techniques (Aging and Domino) that turn small evictions into a clear timing gap. The attack requires no RDMA/DDIO and no kernel or driver modifications.
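For intuition only, here is a toy Python sketch of the amplification principle: a single probe's hit/miss gap can drown in noise, but aggregating repeated probes pulls the two timing distributions apart. All latency numbers are invented, and making a small eviction repeatably observable is exactly what Aging and Domino address in the paper; this is not NICraft's code.

```python
import random

HIT_NS, MISS_NS, NOISE_NS, ROUNDS = 40, 200, 30, 32  # invented numbers

def probe(evicted: bool) -> float:
    # One timing probe: hit and miss differ, but noise blurs a single sample.
    return random.gauss(MISS_NS if evicted else HIT_NS, NOISE_NS)

def amplified(evicted: bool) -> float:
    # Aggregate many probes so the two distributions pull far apart.
    return sum(probe(evicted) for _ in range(ROUNDS))

threshold = ROUNDS * (HIT_NS + MISS_NS) / 2
print("cached reads as hit:  ", amplified(False) < threshold)  # True
print("evicted reads as hit: ", amplified(True) < threshold)   # False
```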
Thank you for attending and for the great discussion!

Slides: https://github.com/amit-choudhari/NICraft/releases/download/slides/NICraft_esorics.pdf
Paper: https://cispa.saarland/group/rossow/papers/nicraft-esorics2025.pdf

With @rossow and Shorya Kumar
#ESORICS #sidechannel #NIC #SmartNIC

I am very happy that two papers from @lunkw1ll have been accepted at the 30th European Symposium on Research in Computer Security (#ESORICS). It was a great collaboration with @lavados, @hweissi and others. The first paper addresses threats to the validity of #Rowhammer research. The second paper presents a method for verifying DRAM addressing functions entirely in software. 🎉
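As background for the second paper: DRAM addressing functions are commonly modeled as XOR reductions over selected physical address bits, one bit mask per bank-address bit. A toy Python sketch of that model (the masks below are invented for illustration, not taken from the paper):

```python
# Hypothetical XOR masks: each mask selects the physical address bits
# whose parity forms one bank-address bit.
MASKS = [0x2040, 0x44000, 0x88000, 0x110000]

def bank_bits(phys_addr: int) -> int:
    """Map a physical address to its bank index under the XOR model."""
    bits = 0
    for i, mask in enumerate(MASKS):
        parity = bin(phys_addr & mask).count("1") & 1  # XOR of selected bits
        bits |= parity << i
    return bits

print(bank_bits(0x3fa40280), bank_bits(0x3fa44280))
```

Addresses that land in the same bank but in different rows produce measurable row conflicts, and that timing signal is what software-only verification can build on.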

We are organising #HS3Workshop 2025: the 1st Workshop on Hardware-Supported Software Security, co-located with #ESORICS on 2025-09-25 in Toulouse, France.

#CfP: We invite work on hardware-based security mechanisms that protect the software stack, with a special theme on secure monitoring and intrusion detection. Submission deadline: 2025-06-13 AoE.

More info: https://hs3-workshop.github.io/2025.html

#cybersecurity #hardware #software

HS3 2025: 1st Workshop on Hardware-Supported Software Security

A workshop at ESORICS 2025, 25 September 2025 in Toulouse, France.

Irfan Bulut from COSIC presenting "Machine Learning-Based Secure Malware Detection with Feature Extraction from Binary Executable Headers" at #SECAI 2024 (in conjunction with #ESORICS 2024)
https://sites.google.com/view/secai2024/programme?authuser=0
SECAI 2024 - Programme

Programme: A total of 42 outstanding submissions were received. The selection process was rigorous, which made choosing papers exceptionally tough. After thorough consideration, 15 papers were selected for presentation at SECAI 2024 on 19-20 September.
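For flavor, header-based feature extraction of the kind the talk title describes can look like the toy sketch below (assuming the pefile package; the field choice is illustrative, not the authors' feature set):

```python
import pefile

def header_features(path: str) -> list[int]:
    """Pull a few numeric fields from a PE file's headers."""
    pe = pefile.PE(path, fast_load=True)  # headers only, skip full parse
    return [
        pe.FILE_HEADER.NumberOfSections,
        pe.FILE_HEADER.Characteristics,
        pe.OPTIONAL_HEADER.SizeOfCode,
        pe.OPTIONAL_HEADER.SizeOfInitializedData,
        pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        pe.OPTIONAL_HEADER.DllCharacteristics,
    ]

# Feature vectors like these would then feed a standard classifier.
print(header_features("sample.exe"))  # hypothetical input file
```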

Last week at #ESORICS in The Hague, our PhD student @marik0 presented the paper "The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning". This work proposes a new algorithm that combines Malware Evasion and Model Extraction (MEME) attacks.

Read the full paper at https://arxiv.org/abs/2308.16562v1

#cybersecurity #redteam #infosec #ML #RL #adversarialattacks

The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning

Due to the proliferation of malware, defenders are increasingly turning to automation and machine learning as part of the malware detection tool-chain. However, machine learning models are susceptible to adversarial attacks, requiring the testing of model and product robustness. Meanwhile, attackers also seek to automate malware generation and evasion of antivirus systems, and defenders try to gain insight into their methods. This work proposes a new algorithm that combines Malware Evasion and Model Extraction (MEME) attacks. MEME uses model-based reinforcement learning to adversarially modify Windows executable binary samples while simultaneously training a surrogate model that closely agrees with the target model being evaded. To evaluate this method, we compare it with two state-of-the-art attacks in adversarial malware creation, using three well-known published models and one antivirus product as targets. Results show that MEME outperforms the state-of-the-art methods in terms of evasion capabilities in almost all cases, producing evasive malware with evasion rates in the range of 32-73%. It also produces surrogate models whose prediction labels agree with those of the respective target models 97-99% of the time. The surrogate could be used to fine-tune and improve the evasion rate in the future.

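To make the evasion-plus-extraction loop concrete, here is a heavily simplified Python cartoon of it. The detector, the "functionality-preserving edit", and the greedy policy are all stand-ins; the actual MEME uses model-based RL on real Windows binaries, so treat this as an illustration of the structure, not the authors' algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def target(x):                  # stand-in for the black-box detector
    return int(x.sum() > 10.0)  # 1 = flagged as malicious

def edit(x, i):                 # stand-in for a functionality-preserving PE edit
    y = x.copy()
    y[i] = max(0.0, y[i] - 1.0)
    return y

# Recon queries seed the surrogate (the model-extraction half of MEME).
X = list(rng.uniform(0.0, 1.25, size=(64, 16)))
y = [target(v) for v in X]
surrogate = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Evasion half: greedily follow the surrogate, logging every real query.
x = rng.uniform(0.8, 1.2, 16)   # synthetic "malicious" feature vector
for step in range(60):
    if target(x) == 0:          # one real query per step: did we evade yet?
        print(f"evaded after {step} edits")
        break
    X.append(x)
    y.append(1)
    surrogate.fit(X, y)         # keep the surrogate in agreement with the target
    cands = [edit(x, i) for i in range(16)]
    p_mal = surrogate.predict_proba(cands)[:, 1]
    x = cands[int(np.argmin(p_mal))]  # take the edit the surrogate likes best
```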

Watch the demo we prepared for our research, "LLM in the Shell: Generative Honeypots", which was presented last week as a poster at #ESORICS in The Hague.

Read our short paper at https://arxiv.org/abs/2309.00155.

#honeypots #networkdefense #CyberSecurity #infosec

https://www.youtube.com/watch?v=0ysdHanr-jA

LLM in the Shell: Generative Honeypots

Honeypots are essential tools in cybersecurity for early detection, threat intelligence gathering, and analysis of attacker behavior. However, most of them lack the realism required to engage and fool human attackers long-term. Being easy to distinguish strongly hinders a honeypot's effectiveness. This can happen because they are too deterministic, lack adaptability, or lack depth. This work introduces shelLM, a dynamic and realistic software honeypot based on Large Language Models that generates Linux-like shell output. We designed and implemented shelLM using cloud-based LLMs. We evaluated whether shelLM can generate output as expected from a real Linux shell. The evaluation was done by asking cybersecurity researchers to use the honeypot and report whether each answer from the honeypot was the one expected from a Linux shell. Results indicate that shelLM can create credible and dynamic answers that address the limitations of current honeypots. ShelLM reached a true-negative rate (TNR) of 0.90, convincing humans that its responses were consistent with a real Linux shell. The source code and prompts for replicating the experiments are publicly available.

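The core loop of an LLM shell honeypot is small. Here is a minimal sketch assuming the OpenAI Python SDK, with a placeholder model name and system prompt (shelLM's real code and prompts are published with the paper):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM = (  # placeholder persona, not shelLM's published prompt
    "You are a Linux server's shell. For every command, reply only with "
    "the raw terminal output, then a new prompt line. Never break character."
)

history = [{"role": "system", "content": SYSTEM}]
while True:
    cmd = input("$ ")
    history.append({"role": "user", "content": cmd})
    resp = client.chat.completions.create(model="gpt-4o-mini",  # placeholder model
                                          messages=history)
    out = resp.choices[0].message.content
    history.append({"role": "assistant", "content": out})
    print(out)
```

Keeping the full transcript in history is what lets the fake shell stay consistent across commands, e.g. showing a file created by an earlier command in a later ls.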

Our research paper "The Power of MEME: Adversarial Malware Creation with Model-Based Reinforcement Learning" by @marik0 and @eldraco was accepted at ESORICS 2023. Read more at: https://arxiv.org/abs/2308.16562

#ai #adversarial #MachineLearning #malware #security #esorics #offensiveML


🎉 Our Paper on IPv6 Fragment Handling Accepted at ESORICS 2023! 🎉

I am thrilled to share that our paper "A New Model for Testing IPv6 Fragment Handling" has been accepted for presentation at the ESORICS 2023 conference! This is a collaborative effort by Edoardo Di Paolo, Angelo Spognardi, and me, delving deep into network security.
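As a taste of the problem space (not the testing model from our paper): overlapping IPv6 fragments, which RFC 5722 requires receivers to drop, can be crafted in a few lines with Scapy. The target address and payloads below are placeholders.

```python
from scapy.all import IPv6, IPv6ExtHdrFragment, Raw, send

DST = "2001:db8::1"  # placeholder target under test
FID = 0x42           # shared fragment ID ties the pieces together

# Fragment 1 covers payload bytes 0-15 (offset is in 8-byte units).
f1 = IPv6(dst=DST) / IPv6ExtHdrFragment(id=FID, offset=0, m=1, nh=59) / Raw(b"A" * 16)

# Fragment 2 claims bytes 8-23 with different data, overlapping fragment 1.
f2 = IPv6(dst=DST) / IPv6ExtHdrFragment(id=FID, offset=1, m=0, nh=59) / Raw(b"B" * 16)

send([f1, f2])  # needs root; observe whether the stack drops or reassembles
```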

Thank you to everyone who has supported us on this journey 🚀🔒

See you at #ESORICS 2023!

#NetworkSecurity #ESORICS2023 #Research #Cybersecurity