Twice now I’ve experienced the fallout of bugs in my coworkers’ code, and when I looked into it, the bug had been introduced by Copilot.

Think about that for a second.

I’m trying to accept that everyone I talk to at work about these systems (I won’t dignify them with the term “intelligence”) ignores my warnings and treats me like a fool for refusing to use them, but now I also have to clean up the messes others make by trusting these things.

This isn’t sustainable.

@requiem you’re not the only person noticing this, unfortunately: https://arxiv.org/pdf/2211.03622.pdf (TL;DR: study participants who wrote AI-assisted code produced code containing more security vulnerabilities in tests *and* self-assessed their code as more secure, compared to participants who wrote their code independently)
@Satsuma I knew this would be bad, but I didn’t think it would get this bad this fast.

@requiem The grey-gooifying of the internet Commons will happen shockingly fast now.

@Satsuma @requiem Some classic Dunning-Kruger there. The machine is 100% confident!
@Satsuma @requiem Did anyone counter that the real problem was not applying enough AI-assisted debugging to the AI-assisted writing of bugs?

@clacke @requiem @Satsuma Make it AI-assisted verification and we can at least talk seriously...

...about making sure we understand what properties we specified!

@Satsuma: I dub this effect the Artificial Dunning & Kruger Phenomenon, or AD&KP.

@Satsuma @requiem I was talking to a journalist from Nature Technology and said that I wouldn’t trust Copilot-generated code further than I could throw it...