RT @thecamjackson
This 100% matches my experience with AI codegen.

It writes code that looks right at a glance, but is wrong in subtle ways. That's what an attacker would do if trying to get a vulnerability into a codebase.

It's so much more dangerous than code which is obviously wrong. https://twitter.com/jjvincent/status/1599743434360639489
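A hypothetical illustration of the kind of bug being described (this example is mine, not from the thread): Python code that reads correctly at a glance but carries a classic subtle defect, a mutable default argument shared across calls.

```python
# Looks right at a glance, wrong in a subtle way: the default list is
# created once at function definition time and shared by every call.
def add_tag(tag, tags=[]):
    tags.append(tag)
    return tags

first = add_tag("draft")
second = add_tag("review")
# second is ["draft", "review"] -- state leaked from the first call,
# and "first" is the very same list object.

# The conventional fix uses None as the sentinel default:
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

A reviewer skimming only the diff would likely approve `add_tag`; nothing about it looks wrong until two calls interact.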

James Vincent on Twitter

“StackOverflow has temporarily banned users from posting AI-generated responses from ChatGPT, with mods saying the volume of incorrect but plausible-looking replies was just too great for them to deal with. Details here: https://t.co/4U8dqOzGi2”

@mfowler I liken Copilot to pairing with a drunk friend. You need to review every line, and often just dismiss the suggestions, but it usually gets you going in roughly the right direction.
@mfowler I think vulnerabilities can be introduced just as easily by someone's own hands as by SO copy-paste or any other code-generating tool. That's also why there are additional tools that scan for code vulnerabilities. It's very human to game a system, and regulating looks like the approach SO chose. Any tool used w/o understanding adds risk, but the real risk is thinking it does more than it actually does. I get the gist; I just see it as more nuanced.