Twice now I’ve experienced the fallout of bugs in my coworkers’ code, and when I looked into it, the bug had been introduced by Copilot.

Think about that for a second.

I’m trying to accept that everyone I talk to at work about these systems (I won’t dignify them by using the term “intelligence”) ignores my warnings and treats me like a fool for refusing to use them, but now I have to clean up the mess others make by trusting these things.

This isn’t sustainable.

@requiem It's vastly disappointing how many people (including here) misunderstand both the problems associated with AI and the capabilities of AI in and of itself.

* The current capabilities of AI are over-hyped and overestimated. It's fancy pattern recognition. It is by no means intelligent.

* Corporations are abusing it to steal code and art, and thus to get rid of jobs.

* AI output is error-prone and always worse than what a skilled human would produce, but bad quality has never stopped a corporation from cheaping out in order to profit.

It is a multiplier in the race to the bottom. Artists, writers, and so on are all getting massively screwed by having derivatives of their work stolen, while at the same time job offers for the simpler tasks vanish. As if creative people needed another kick while down. And their customers are being screwed by getting worse products in the end.

We're not "scared" of AI because we think it might go skynet on us. It ain't that clever. It's problematic because it gives corporations another way to exploit us. On a massive scale.

And sorry, but "Should have reviewed the code" is a lame excuse. We all know it's harder and slower to properly and thoroughly review code than to write it from scratch, especially for the trivial stuff AI would be used for at this time.

By using AI you're feeding more data to the companies running it, which they can assimilate into their models. By using the tools you are accelerating the problem and actively making the world worse.

There is just no reason and no excuse to use AI. Just don't.

@jns @requiem I'm curious whether you think there is any place for such systems? I agree that there are many problems with the theft of IP. And certainly the output for anything complex can be questionable at the moment, though perhaps no more so than what the average Google search brings up. But as an interactive teaching system that does a good job of recognising what you're saying/asking and producing some explained output, I think it has great potential. At least I've found value in it.

@makergeek @requiem I think that's asking the wrong question. One can always dream up use cases, but the question we should be asking is whether the benefit we get out of those use cases outweighs the potential (and real) harm.

For instance, I don't work on AI projects, not because I can't think of any use cases, but because it would mean endorsing the current hype surrounding it. Companies currently using AI for profit rely on that hype in order to get more funding.

So the question is, ultimately, whether what I'm doing benefits or harms society. And when it comes to using or developing AI projects, any time I weigh that balance, it shifts heavily towards harm.

@jns @requiem Interesting. It's such a broad topic. I guess I can see what you're saying. I tend to be something of a techno-optimist, and my dabbling with the LLMs has so far been quite positive. But I do see the dangers. Either way, I'm not sure the genie is going back in the bottle now.