Twice now I’ve experienced the fallout of bugs in my coworkers’ code, and when I looked into it, the bug had been introduced by Copilot.

Think about that for a second.

I’m trying to accept that everyone I talk to at work about these systems (I won’t dignify them by using the term “intelligence”) ignores my warnings and treats me like a fool for refusing to use them, but now I have to clean up the mess others make by trusting these things.

This isn’t sustainable.

@requiem In these instances, does Copilot also introduce unit/integration tests as part of the code change?

@weiser not that I'm aware of, but these engineers are also using it to write tests generally.

Having a machine you don't understand write your safety net seems to me like asking for trouble.

@requiem I agree with you. I'm just trying to explore what being responsible with a New Fancy Tool looks like.

E.g. if I see a PR with a lot of code changed but no unit tests, I might push back on approving it until there are useful tests. Might a related software-engineering discipline keep "blind trust of Copilot code" out of main?