Is your GPU trustworthy? 🤔
Today, Julian presents our work on implanting machine learning backdoors in hardware at ACSAC. Our backdoors reside within a hardware ML accelerator, manipulating models on the fly while remaining invisible from the outside.
https://mlsec.org/docs/2024-acsac.pdf
This work is an unusual collaboration between folks from adversarial learning and hardware security. It took some effort to design a dormant backdoor small enough to fit into an FPGA accelerator. In the end, just 30 parameter changes, a mere 0.069% of the model, were enough for success.
It's clear: hardware must not be blindly trusted. AI systems are no exception; they can be undermined by trojanized chips such as malicious FPGAs or GPUs. Some reviewers called this far-fetched, but I'd rather err on the side of caution and push for stronger protection 🚨