Twice now I’ve experienced the fallout of bugs in my coworkers’ code, and when I looked into them, the bugs had been introduced by Copilot.

Think about that for a second.

I’m trying to accept that everyone I talk to at work about these systems (I won’t dignify them by using the term “intelligence”) ignores my warnings and treats me like a fool for refusing to use them, but now I also have to clean up the mess others make by trusting these things.

This isn’t sustainable.

@requiem I don't think LLMs as dev tools are going away, but I do think there is a lot of room for growth for all of us in understanding how to use these tools in a responsible and ethical manner.
@davidshq I would argue that it's impossible to use them in an ethical manner. They are built from the bodies of other programmers, and they will continue to consume us and displace us while yielding exponentially worse software as they consume their own output.
@requiem In what way are they built from the bodies of other programmers? Do you mean the use of open source code to train the LLMs? If so, I agree to some extent. It seems to me that allowing one's code to be consumed by an LLM should be opt-in rather than opt-out... and I suspect we will see many open source licenses updated to include clauses forbidding the use of the code within LLM contexts, or something similar.

@davidshq precisely.

I created a license expressly for this reason a year or so ago, but everyone thought I was nuts 🤣

@requiem @davidshq Not nuts, just ineffectual, as so far nobody seems to be challenging their legal theory that training data does not create derivatives in the legal sense.