The use case for AI is to spam
@cendyne We've been using automation-based tools for years to spam people (robocalls, emails, etc.) so it makes perfect sense that AI would be harnessed to do the same thing. Legality, ethics, and common decency be damned when there's a possibility of money to be made. 
@cendyne @cetsch phishing will probably be quite good too 🎣

@RoboticistDuck @cendyne @cetsch I’m just thinking of the scenes from the “Terminator” franchise where the cyborgs are talking on the phone with a clearly fake voice. (“I love you too, sweetie” and “What’s wrong with Wolfie?”)

If I ever get a weird call, I'd just say something nonsensical and see how the "family member" replies. We also have an "under duress" statement (a phrase that sounds plausible but is totally false for our family) for a serious emergency.

@DeltaWye @cendyne @cetsch it’s already happening on YouTube. Videos with flashy thumbnails, titles, scripts, voices, graphics and video content all or in part created by AI. Our solution should be to have our AI avatar interact with such beasts.

@RoboticistDuck @cendyne @cetsch I hear that telltale, ever-so-slightly robotic voice and I immediately exit. Most of the time the tell is in the channel or the thumbnails. But it's gotten so bad, and so saturated.

The whole “Elsagate” thing several years back, which led to those horrible mass-produced “kids videos,” was a final warning that YouTube was probably NOT going to be able to properly curate content in the future.

@cendyne for those who want to read more, check out the full article by Amy Castor and @davidgerard at https://amycastor.com/2023/09/12/pivot-to-ai-pay-no-attention-to-the-man-behind-the-curtain/
@cendyne bad actors have the highest motivation to adopt new tools to make their bad plans more productive
@cendyne The use case for AI is spam articles on tech sites about how AI is making too much spam.

@cendyne That's what made me anxious the first time I got to play with GPT-3: I knew it was a big deal and that it would become hard to tell humans apart from bots.

Now I occasionally click on a video with an obvious AI voice, with text generated by ChatGPT, or find a comment section that looks very sus. This is still very primitive stuff done by lazy spammers; it's only a matter of time until someone creates more sophisticated, fully automated tools.

And we're just getting started: these models are gaining vision, and there's no real barrier to giving them more modalities so they become even more natural without relying on TTS.

Making CAPTCHAs is becoming more difficult.

And then the CEO of "Open"AI comes out with a crypto grift that involves scanning people's irises to "address" that in a backwards way.

Now just wait until these advances get used for killing by militaries and police somewhere, and we've got ourselves a complete dystopia. And it's not even cool like in cyberpunk; it's mostly spam.

@cendyne What will be hilarious is if the AI-enabled search engines start to pick up on this and offer “the use case for AI is to spam” as their actual answer to the question.
@cendyne the one upside of all of this is that it seems to be costing the AI firms vast sums to run these models, with very little return.
Perhaps they will have to start charging once the VC money runs out, and a lot of this rubbish will stop.
Taking ChatGPT on a phishing expedition

Are you sure the person you're chatting with online is real? Recent progress in language models like ChatGPT has made it shockingly easy to create bots that run phishing operations on users at scale.

adversarial designs