Konrad Rieck 🌈

347 Followers
112 Following
97 Posts
Machine Learning and Security,
Professor of Computer Science at TU Berlin
Website: http://www.mlsec.org

Did AI folks not value your security insights or vice versa? Maybe you’re submitting your papers to the wrong conference.

IEEE SaTML has you covered! We are eager to read your work on the security, privacy, and fairness of AI.

👉 https://satml.org/call-for-papers
⏰ Deadline: Sep 24

Got some hot research cooking? 🔥

The SaTML paper deadline is just 9 days away. We are looking forward to your work on security, privacy, and fairness in machine learning.

👉 https://satml.org/call-for-papers/
⏰ Sep 24

Three weeks to go until the SaTML 2026 deadline! ⏰ We look forward to your work on security, privacy, and fairness in AI.

🗓️ Deadline: Sept 24, 2025

We have also updated our Call for Papers with a statement on LLM usage. Check it out:

👉 https://satml.org/call-for-papers

We’re happy to announce the Call for Competitions for SaTML!

The competition track has been a highlight of SaTML, featuring exciting topics and strong participation. If you’d like to host one for SaTML 2026, visit:

👉 https://satml.org/call-for-competitions
⏰ Deadline: Aug 6

We're excited to announce the Call for Papers for SaTML 2026, the premier conference on secure and trustworthy machine learning.

We seek papers on secure, private, and fair learning algorithms and systems.

👉 https://satml.org/call-for-papers
⏰ Deadline: Sept 24

No plans for April 9–11 yet? Why not spend an amazing week in beautiful Copenhagen 🇩🇰, exploring cutting-edge research on trustworthy machine learning?

Join us at SaTML 2025, the premier conference on AI security, AI privacy, and AI fairness!

👉 satml.org/attend

Is your GPU trustworthy? 🤔

Today, Julian presents our work on implanting machine learning backdoors in hardware at ACSAC. Our backdoors reside within a hardware ML accelerator, manipulating models on the fly while remaining invisible from the outside.

https://mlsec.org/docs/2024-acsac.pdf

This work is an unusual collaboration of folks from adversarial learning and hardware security. It took some effort to design a dormant backdoor small enough to fit into an FPGA accelerator. In the end, just 30 parameter changes—0.069% of the model—were enough for success.
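
For intuition, here is a minimal software sketch of the mechanism, assuming a PyTorch stand-in model: the model as stored stays clean, and the tampered execution path applies a few parameter patches only during inference, restoring them afterwards. The TrojanedAccelerator class and the patch values are made up for illustration; the real backdoor is implemented in FPGA logic, as described in the paper.

```python
# Hypothetical sketch: simulate an accelerator that tampers with model
# parameters only during inference. The model on disk stays clean, so
# offline inspection of the weights reveals nothing.
import torch
import torch.nn as nn

class TrojanedAccelerator:
    def __init__(self, model, patches):
        self.model = model
        self.patches = patches  # list of (param name, flat index, value)

    @torch.no_grad()
    def __call__(self, x):
        params = dict(self.model.named_parameters())
        saved = []
        # Implant the dormant patches just before the forward pass.
        for name, idx, val in self.patches:
            flat = params[name].view(-1)
            saved.append((name, idx, flat[idx].item()))
            flat[idx] = val
        out = self.model(x)
        # Restore the clean weights after inference.
        for name, idx, old in saved:
            params[name].view(-1)[idx] = old
        return out

model = nn.Linear(4, 2)  # stand-in for a trained model
accel = TrojanedAccelerator(model, [("weight", 3, 42.0)])  # invented patch
x = torch.randn(1, 4)
print(accel(x))  # manipulated prediction
print(model(x))  # the clean model behaves normally
```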

It's clear: hardware must not be blindly trusted. AI systems are no exception; they can be undermined by trojanized chips like malicious FPGAs or GPUs. Some reviewers called this far-fetched, but I’d rather err on the side of caution and push for stronger protection. 🚨

No plans for April 9–11 yet? Why not spend a fantastic week in beautiful Copenhagen, exploring top research on trustworthy machine learning?

Registration for IEEE SaTML is now open: https://satml.org

We are also offering travel scholarships: https://satml.org/scholarships/

🚨 We’re thrilled to announce the keynote speakers for SaTML 2025: Michael Veale (@mikarv), Kamalika Chaudhuri (UCSD), and Matt Turek (DARPA).

👉 https://satml.org/keynotes/

Don’t miss out on #SaTML2025 in Copenhagen 🇩🇰, April 2025!

🚨 We are extending the Call for Papers for the 3rd IEEE Conference on Secure and Trustworthy Machine Learning (@satml_conf)!

👉 satml.org/participate-cf…
⏰ New Deadline: Sep 27

This extension gives you more time to submit your best work on secure AI algorithms and systems 😉