This is fine...
"We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group."

https://arxiv.org/abs/2211.03622

Do Users Write More Insecure Code with AI Assistants?

We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.


@nblr I find the chart in Section 5 particularly interesting, specifically the "I trusted the AI to produce secure code" section. In the Experiment group, the participants who gave secure answers for Q2 and Q3 didn't trust the AI to write secure code, while those who gave insecure answers overwhelmingly trusted it.

Also, for the "I solved this task securely" question on Q2, the Experiment group participants who did solve it securely were 100% confident of that. Yet they strongly agreed that they didn't trust the AI to solve it securely??

Bit odd, innit?

@ApisNecros @nblr I don't think that's odd at all; if anything it's obvious: they didn't trust the AI, did their own additional research, and fixed/amended the AI code - just as other coders fix and amend the Stack Overflow code they find ;)
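To make that "fix the AI's code" step concrete: here's a minimal sketch of the kind of review a distrustful participant might do. SQL injection is a classic example of the sort of vulnerability such studies probe; this snippet is purely illustrative (the function names, schema, and scenario are hypothetical, not taken from the paper).

```python
import sqlite3

# Hypothetical AI-suggested version: builds the query with string
# interpolation, so attacker-controlled `name` becomes part of the SQL.
def find_user_insecure(conn, name):
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{name}'"
    ).fetchall()

# Reviewed/amended version: a parameterized query keeps the data out of
# the SQL text entirely.
def find_user_secure(conn, name):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(find_user_insecure(conn, payload))  # injection matches every row
print(find_user_secure(conn, payload))    # fixed version matches nothing
```

The diff between the two functions is tiny, which is arguably the point: it takes an extra moment of skepticism, not extra expertise, to catch.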