We have uploaded our preprint "Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion". We investigate if code models can be tricked into suggesting vulnerable code. This malicious effect can be achieved without adding vulnerable code to the training data. Our attacks by-pass static analysis. None of the evaluated defenses can prevent our attack effectively, except for Fine-Pruning, which requires a trusted data set — which is the problem in the first place.











