Have you ever told a secret to a friend and later regretted it?
At #NDSS23, Alex will present our approach to machine unlearning of features and labels. For learning models, at least, we can remove information in retrospect 🫠.
Unlike previous work, which is limited to forgetting entire data points, we can unlearn arbitrary feature values from a trained model. This is possible because we estimate their influence and compute a reverse model update (like a hangover 🍺).
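The idea of a reverse model update can be illustrated with a toy sketch: train a simple model, then replace the feature values to forget with corrected ones and shift the parameters by the (scaled) gradient difference. Everything below is an illustrative assumption for a logistic-regression toy, not the code from the paper.

```python
import numpy as np

# Toy sketch of an influence-style first-order "reverse" update for
# unlearning feature values. Model, data, and step size tau are
# illustrative assumptions, not the authors' implementation.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(theta, X, y):
    """Gradient of the average logistic loss over a batch (X, y)."""
    p = sigmoid(X @ theta)
    return X.T @ (p - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Train a simple model with plain gradient descent.
theta = np.zeros(5)
for _ in range(500):
    theta -= 0.5 * grad(theta, X, y)

# "Unlearn" a feature value: replace the affected points Z by Z_tilde
# (here: zero out feature 0 in the first 20 samples) and apply one
# first-order reverse update:
#   theta_new = theta - tau * (grad(Z_tilde) - grad(Z))
Z = X[:20]
Z_tilde = Z.copy()
Z_tilde[:, 0] = 0.0
tau = 0.5
theta_new = theta - tau * (grad(theta, Z_tilde, y[:20]) - grad(theta, Z, y[:20]))
```

The update nudges the model toward the parameters it would have learned had it seen the corrected data, without retraining from scratch.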
Check out our paper https://www.mlsec.org/docs/2023-ndss.pdf