This is fine...
"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

https://arxiv.org/abs/2211.03622

Do Users Write More Insecure Code with AI Assistants?

We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.


@nblr Given how badly the models deal w/ poisoning: https://arxiv.org/abs/2311.12202

I think it's only a matter of time before folks actively try to exploit these models to plant easily exploitable code.

If you are not fully reviewing all the generated code and treating it as if it were adversarial, you are asking for trouble. At that point, one has to ask: is it worth it?
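To make the point concrete, here is a hypothetical illustration (not from the study) of the kind of subtle flaw an adversarial code review should catch: a SQL query built by string interpolation, which an assistant might plausibly emit, next to the parameterized version a reviewer should insist on.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: interpolating user input lets payloads like
    # "' OR '1'='1" rewrite the query (classic SQL injection).
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, so the
    # same payload is treated as a literal string, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Both functions behave identically on benign input, which is exactly why this class of bug slips past a review that only checks whether the code "works".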

Nepotistically Trained Generative-AI Models Collapse

Trained on massive amounts of human-generated content, AI-generated image synthesis is capable of reproducing semantically coherent images that match the visual appearance of its training data. We show that when retrained on even small amounts of their own creation, these generative-AI models produce highly distorted images. We also show that this distortion extends beyond the text prompts used in retraining, and that once affected, the models struggle to fully heal even after retraining on only real images.

@shafik That's indeed the real tradeoff at hand.
@nblr I think the failure of Computer Science programs to deeply integrate philosophy and ethics into their curricula has served society very poorly.