I wish I didn't auto delete my toots sometimes, as I predicted this about 6 months ago...

People are injecting malware responses into Microsoft's AI, so now when you ask it questions it is serving people malware downloads. https://www.bleepingcomputer.com/news/security/bing-chat-responses-infiltrated-by-ads-pushing-malware/

Bing Chat responses infiltrated by ads pushing malware

Malicious advertisements are now being injected into Microsoft's AI-powered Bing Chat responses, promoting fake download sites that distribute malware.

BleepingComputer

From that thread (RIP), expect nation states, SEO spammers and more to be filling generative AI with crap to install malware, influence policy documents, research etc etc.

It's absolutely the next stage of enshittification (sorry, I mean increasing shareholder value) where everybody can pretend to shocked it happened in two years.

@GossiTheDog it sees the shit and is learning it. Makes sense.
@johnefrancis @GossiTheDog Doing exactly what it was made to do, learn. It can't unlearn either, unless they have backups of it to load.
@jackemled @GossiTheDog yeah, it's hard to see how anyone can claim to have undone the copyright violations they committed during training. So I guess they'll just have to license at whatever term the creator wants.

@johnefrancis @GossiTheDog "ooh it's not stored intact, the ai shreds it", so you admit it's not intelligent, & that it's just an idea blender?
"nooo it's original it makes new things, inspired by what it remembers", so it does involve stealing copyrighted material?

I wish they would at least make up their minds & be consistent with their bullshit.

@jackemled @GossiTheDog I guess we'll find out when it starts spitting out mouse ears and Like Skywalkish and the AI companies eat Disney's lunch.
@johnefrancis @GossiTheDog I hope it's soon!
@jackemled @johnefrancis @GossiTheDog somebody needs to develop a robust multistage ML poisoning scene. Single stage to make it respond with desired responses to targeted keywords already exists but seems too easy to revert, but if you can slowly plant the malicious responses over time and "activate" them by publishing another final set of seemingly irrelevant samples then you can plant all kinds of malicious responses that are VERY hard to delete from the model
@GossiTheDog Note to Self: Work on “surprised” face.
@GossiTheDog I've been going out of my way to avoid "AI generated content," which is usually stale and often incorrect. I don't like how Microsoft (Bing, Edge) and others are shoving it down everyone's throats.
@GossiTheDog surprised? Nope. I was aware about nature of "Innovative capitalism" for some 8 years now. The more I learn the more I realize how dumb and opportunistic it is by nature. And tbh I do not blame them. I blame the media and gov basically following their lead. They have power, but they did not care. All they cared about was steak at the CEO summer house party.
But some of us (well, I feel personally myself) also share the blame for not always being vigilant about this.

@GossiTheDog When you’re right you’re right.

Just curious, why do people auto-delete posts?

@earthlingusa @GossiTheDog Search engines crawling sites, sometimes even when told not to. Techbros doing the same thing to train LLMs.
@GossiTheDog just wait until AI starts getting applied to discovering exploits.

@GossiTheDog dude it’s a complete shit show. I tried to get a bounty for jailbreaking the model through bings GPT4 interface. Microsoft sent me to openAI. OpenAI just kept saying it was fixed regardless of the screenshots and videos I was sending them. Literally just refusing to accept it.

Let’s just bury our heads in the sand and maybe the problem with go away. So this isn’t the least bit surprising.