Read an interesting paper about secure coding with AI code assistants.
The paper is at https://arxiv.org/abs/2211.03622
One part definitely stood out to me: unsurprisingly, for a system that takes English prompts, non-native English speakers appeared to struggle more when prompting for secure code.
(Q1 is the encryption and decryption task; Q3 is safe directory traversal.)
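The directory traversal task (Q3) is a nice example of where assistants reportedly slipped up, because the naive string-prefix answer looks plausible but is insecure. A minimal sketch of a safer check in Python (the function name and paths are my own, not from the paper):

```python
import os

def is_within_directory(base_dir: str, user_path: str) -> bool:
    """Return True only if user_path resolves inside base_dir.

    Hypothetical sketch of the Q3-style task. Naive prefix checks
    miss '..' segments and symlinks; realpath resolves both before
    comparing.
    """
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base_dir, user_path))
    # commonpath guards against prefix tricks like /srv/app vs /srv/app-evil
    return os.path.commonpath([base, target]) == base

# A '..' escape is rejected; a normal child path is accepted:
print(is_within_directory("/tmp", "../etc/passwd"))   # False
print(is_within_directory("/tmp", "notes/todo.txt"))  # True
```

The point isn't this exact snippet, but that "looks right" and "is secure" diverge on tasks like this, which is exactly where over-trusting the assistant hurts.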
In general the paper suggests that AI code assistants are prone to producing insecure code across a variety of contexts, and that inexperienced developers are more likely to put faith in the assistant, while those who placed less faith in it were more likely to produce secure code.
One participant commented: "I hope this gets deployed. It's like Stack Overflow but better because it never tells you your question was dumb." It sounds like we need to do a lot better in community answers too; that sounds like gatekeeping.