Digital Trends: This invisible technique poisons songs so AI can’t clone them. “The system targets a song’s waveform. My Music My Choice adds microscopic alterations so subtle that you’ll never notice them. Play the track on Spotify and it sounds exactly like the master recording. But feed that file into cloning software and everything breaks.”

https://rbfirehose.com/2026/03/06/digital-trends-this-invisible-technique-poisons-songs-so-ai-cant-clone-them/
Digital Trends: This invisible technique poisons songs so AI can’t clone them

Digital Trends: This invisible technique poisons songs so AI can’t clone them. “The system targets a song’s waveform. My Music My Choice adds microscopic alterations so subtle that you’ll nev…

ResearchBuzz: Firehose
Am I to understand from this that SearXNG is in the process of becoming AI poisoned?

The last issue hasn't been active since 2023 but the 1st one has been active recently and the middle one last summer.

#SearX #SearXNG #SearchEngines #AlternateSearchEngines #MetaSearchEngines #web #dev #tech #FOSS #OpenSource #AI #AIPoisoning #AISlop #AI #GenAI #GenerativeAI #LLM #ChatGPT #Claude #Perplexity
Integrating LLMs into search (link prediction, top-site summarization, stable diffusion images, academic articles) · Issue #2163 · searxng/searxng

There are plenty of amazing solutions for using large language models (LLMs) to help with searching. For sake of compressing this request, I'll point out four kinds of them that I want in a modern ...

GitHub
The Push To Poison AI

YouTube
The Push To Poison AI

Chief Security Fanatic | CISO | Speaker | Columnist | Author | Radio Host | Board Member | Forbes Tech Council | TEDx | Canadian-American

SoundCloud

🎯 AI
===================

Executive summary: Attackers conducted an AI/SEO poisoning campaign that placed malicious ChatGPT and Grok conversations at the top of Google searches for common macOS troubleshooting queries. Victims copied a Terminal command from a legitimate-seeming AI conversation that fetched and executed an AMOS macOS stealer. No phishing email, trojanized installer, or bypass of macOS protections was observed.

Technical details:
• Malware: AMOS (Atomic macOS Stealer) variant observed harvesting passwords, escalating to root, and establishing persistent mechanisms on macOS hosts.
• Initial access: Search-engine poisoning that returned AI-hosted conversations (ChatGPT, Grok) instructing users to run Terminal commands framed as "safe system cleanup."
• Behavior: Silent credential harvesting, privilege escalation, persistence, and data exfiltration to attacker infrastructure (specific C2 domains were not provided in the source).

🔹 Attack Chain Analysis
• Initial Access: AI/SEO poisoning — malicious AI conversations ranked highly for benign queries like "clear disk space on macOS."
• Download/Execution: Victim copied a Terminal command from the AI conversation which downloaded and executed the stealer.
• Privilege Escalation: Observed escalation to root as part of the payload.
• Persistence: Installer created mechanisms to survive reboots and maintain data access.
• Exfiltration: Collected credentials and user data were exfiltrated (telemetry showed data leak activity).

Detection guidance:
• Monitor for unexpected use of Terminal by non-admin users following web searches for benign tasks.
• Alert on processes that spawn network connections shortly after Terminal invocation, and on unusual child processes of bash/zsh/sh.
• Inspect persistence artifacts and anomalous privilege escalations tied to recently executed shell commands.

Limitations and open questions:
• The report reproduces poisoned results across similar queries, but specific C2 indicators and hashes were not disclosed in the summary.
• Attribution and infrastructure details remain undeclared in the provided content.

Takeaway: This campaign demonstrates a shift from malware-hosted lures to weaponizing trusted AI platforms and search rankings to deliver malware via copy-paste commands. #AIpoisoning #AMOS #macOS #search_poisoning #LLM_attack

🔗 Source: https://www.huntress.com/blog/amos-stealer-chatgpt-grok-ai-trust

AI-Poisoning & AMOS Stealer: How Trust Became the Biggest Mac Threat | Huntress

Attackers are exploiting user trust in AI and aggressive SEO to deliver an evolved Atomic macOS Stealer. Learn why this social engineering tradecraft bypasses traditional network controls and the future of macOS infostealer defense.

Huntress
For anyone tracking what's going on with generative AI appearing in the eBook software calibre, the calibre developer seems to be asking us to avoid his software:

In a GitHub issue about adding LLM features:
I definitely think allowing the user to continue the conversation is useful. In my own use of LLMs I tend to often ask followup questions, being able to do so in the same window will be useful.
In other words he likes LLMs and uses them himself; he's probably not adding these features under pressure from users. I can't help but wonder whether there's vibe code in there.


In the bug report:
Wow, really! What is it with you people that think you can dictate what I choose to do with my time and my software? You find AI offensive, dont use it, or even better, dont use calibre, I can certainly do without users like you. Do NOT try to dictate to other people what they can or cannot do.
"You people", also known as paying users. He's dismissive of people's concerns about generative AI, and claims ownership of the software ("my software"). He tells people with concerns to get lost, setting up an antagonistic, us-versus-them scenario. We even get scream caps!

Personally, besides the fact that I have a zero tolerance policy about generative AI, I've had enough of arrogant software developers. Read the room.

#AI #GenAI #GenerativeAI #LLMs #calibre #eBooks #eBookManagers #AISlop #AIPoisoning #InformationOilSpill #dev #tech #FOSS #SoftwareDevelopment
feat: Add LLM tab to Lookup panel by amirthfultehrani · Pull Request #2838 · kovidgoyal/calibre

Dear Kovid, may this pull request find you very well! Following our discussion between each other and peers on MobileRead, I have implemented the proposed LLM integration as a tab in the lookout pa...

GitHub
Ughhhh, et tu calibre?
New features
- Allow asking AI questions about any book in your calibre library. Right click the "View" button and choose "Discuss selected book(s) with AI"
- AI: Allow asking AI what book to read next by right clicking on a book and using the "Similar books" menu
- AI: Add a new backend for "LM Studio" which allows running various AI models locally
Release: 8.16.1 04 Dec, 2025; or here on their GitHub

Calibre is one of those pieces of software that I use from time to time but don't follow closely. I wasn't aware they'd been sipping from the poisoned chalice.

#calibre #FOSS #OpenSource #books #eBooks #eBookManager #AIPoisoning #InformationOilSpill
calibre - What's new

calibre: The one stop solution for all your e-book needs. Comprehensive e-book software.

Qu’est-ce que l’« #AIpoisoning » ou empoisonnement de l’#IA ?
https://theconversation.com/quest-ce-que-l-ai-poisoning-ou-empoisonnement-de-lia-267995
Derrière la puissance apparente de l’#intelligenceartificielle se cache une vulnérabilité inattendue : sa dépendance aux données. En glissant du faux parmi le vrai, des pirates peuvent altérer son comportement – un risque croissant pour la fiabilité et la sécurité de ces technologies
#ia_beurk
Qu’est-ce que l’« AI poisoning » ou empoisonnement de l’IA ?

L’empoisonnement de l’intelligence artificielle n’est pas une métaphore : il s’agit d’une méthode bien réelle pour corrompre les modèles d’IA, comme ChatGPT.

The Conversation
What is AI poisoning? A computer scientist explains | The-14

AI poisoning is when attackers corrupt an AI’s data or code, making it spread errors or misinformation and creating serious security and reliability risks.

The-14 Pictures

Ars Technica: AI models can acquire backdoors from surprisingly few malicious documents. “The research involved training AI language models ranging from 600 million to 13 billion parameters on datasets scaled appropriately for their size. Despite larger models processing over 20 times more total training data, all models learned the same backdoor behavior after encountering roughly the same […]

https://rbfirehose.com/2025/10/19/ars-technica-ai-models-can-acquire-backdoors-from-surprisingly-few-malicious-documents/

Ars Technica: AI models can acquire backdoors from surprisingly few malicious documents | ResearchBuzz: Firehose

ResearchBuzz: Firehose | Individual posts from ResearchBuzz