This study from Stanford shows that people who use GitHub Copilot produce code with more security flaws than people who don't. It's roughly the same size as the study GitHub keeps citing to claim Copilot makes developers faster. https://www.theregister.com/2022/12/21/ai_assistants_bad_code/
Study finds AI assistants help developers produce code that's more likely to be buggy

At the same time, tools like GitHub Copilot and Facebook InCoder make developers believe their code is sound.

The Register
@seldo it's important to me that I ship my bugs and security flaws to prod as quickly as possible. Gotta keep that velocity metric up!
@tylerlwsmith if ML can predict what you're going to write why are you writing it?
@seldo great point! I’ve just promoted myself to manager of the robot 😁
@seldo How come no one else has said “move fast and break things”?
@seldo I guess a bit like ChatGPT being used to generate technical articles. Great at producing something that looks reasonable and believable to the untrained eye, but a close inspection by someone experienced in the topic usually finds a load of errors.
@seldo the speed at which people can produce bad code is not a great metric.
@seldo The real question is: how much of this stuff ends up in prod? And how many of the security issues stem from undefined behavior that's impossible in more secure languages?
@HoloPengin @seldo you can write insecure code in any language. How it’s insecure may change.
@laffer1 @seldo True, but memory safety issues are a pretty big error class that is rather difficult to cause in Rust
@seldo can't say I find it surprising. If the user isn't scrutinizing what it generates (and I'd guess many aren't), it's really not so different from copy/pasting directly from StackOverflow. We all know how that goes.
@seldo well it's just the other side of the coin: they "write" code faster but they don't fully understand it, making it more flawed.
@seldo move fast and introduce vulnerabilities
@seldo this research is important, but the reporting seems questionable: it doesn't mention that the paper isn't peer reviewed. The paper is also based on the public version of OpenAI Codex, and it isn't clear that Copilot uses the same model.

@seldo

It's how Skynet starts...

@seldo this makes a lot of sense, since code reviews are a lot harder when you can’t communicate with the author of the code.
@seldo @krzyzanowskim AI trained on buggy human code generates buggy computer code
@seldo blind copy paste doesn't work? Hrmmm ):
@seldo
Exactly. #Microsoft #Copilot helps developers create security flaws much faster than before!