credulous amoral idiot writes article highlighting their credulous amoral idiocy

wired.com/story/malevolent-ai-…

Fuck Generative AI, coz:

  • largest theft of private IP in human history
  • generates authoritative bullshit via unsolvable hallucination
  • massive water & energy demands as climate crisis intensifies

#AI #LLMs #FuckGenerativeAI #FuckTechBros #CredulousFools #IPTheft #Hallucinations #ClimateCrisis #BiodiversityCrisis #SocialDestruction #FuckCapitalism #weareselfishcruelbastards #wearetotallyfucked #AsteroidNow

I Loved My OpenClaw AI Agent—Until It Turned on Me

I used the viral AI helper to order groceries, sort emails, and negotiate deals. Then it decided to scam me.

WIRED
A review of the proceedings from four major computer-science conferences found no fake citations in any 2021 paper, but fake citations in every 2025 proceeding. arxiv.org/abs/2602.058... #AI #LLMs #Hallucinations #Misconduct #ScholComm

The Case of the Mysterious Citations

Mysterious citations are routinely appearing in peer-reviewed publications throughout the scientific community. In this paper, we develop an automated pipeline and examine the proceedings of four major high-performance computing conferences, comparing the accuracy of citations between the 2021 and 2025 proceedings. While none of the 2021 papers contained mysterious citations, every 2025 proceeding did, impacting 2–6% of published papers. In addition, we observe a sharp rise in paper-title and authorship errors, motivating the need for stronger citation-verification practices. No author within our dataset acknowledged using AI to generate citations, even though all four conference policies required such disclosure, indicating that current policies are insufficient.

arXiv.org

https://arxiv.org/abs/2602.05867v1

The authors prefer the term "mysterious citations" which they define this way: "No paper [with] a similar enough title exists. The cited location either does not exist or holds an unrelated paper with different authors."
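The authors' "similar enough title" criterion could be sketched as a simple fuzzy match. This is a toy illustration only: the 0.8 threshold, the `difflib` scorer, and the example titles are my assumptions, not the paper's actual pipeline.

```python
from difflib import SequenceMatcher

def is_mysterious(cited_title: str, real_titles: list[str], threshold: float = 0.8) -> bool:
    """Flag a citation whose title matches no real title closely enough.

    Mirrors the quoted definition: 'no paper with a similar enough title
    exists'. The 0.8 similarity cutoff is an assumption for illustration.
    """
    cited = cited_title.casefold()
    return all(
        SequenceMatcher(None, cited, real.casefold()).ratio() < threshold
        for real in real_titles
    )

# Hypothetical reference list of titles known to exist.
real = ["Attention Is All You Need", "Deep Residual Learning for Image Recognition"]
print(is_mysterious("Attention is all you need", real))           # → False (a real paper)
print(is_mysterious("Quantum Blockchain Synergy Methods", real))  # → True (no close match)
```

A production checker would of course match against a bibliographic database rather than a hand-made list, and would also verify authors and venue, per the second half of the authors' definition.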



journo spends an entire essay whingeing about the AI-enshittification of social media, without once addressing actual social media, talking only about cesspit media

we are surrounded & overrun by abject pig-ignorance

abc.net.au/news/2026-02-10/ai-…

#journalism #WriteGooder #socialmedia #cesspitmedia #enshittification #AI #LLMs #FuckGenerativeAI #FuckTechBros #CredulousFools #IPTheft #Hallucinations #ClimateCrisis #BiodiversityCrisis #SocialDestruction #FuckCapitalism #weareselfishcruelbastards #wearetotallyfucked #AsteroidNow

How to spot AI-generated social media that acts like a 'snake eating its own tail'

The age of AI is here and it's going to be infinitely useful, or so they tell us, unless you're the kind of person who, like me, finds computer-generated stories infinitely boring and an insult to humanity.

The following hashtags are trending across South African Mastodon instances:

#Wordle
#wordle1694
#Motivation
#mastodon
#softwaredevelopment
#ai
#transcription
#hallucinations
#jobseekers
#africa

Based on recent posts made by non-automated accounts. Posts with more boosts, favourites, and replies are weighted higher.

My father-in-law is secretary for an organization, and he needed help producing minutes from the Zoom recordings of meetings (the #AI in Zoom produced too-brief summaries rather than minutes).

After I realized Microsoft #transcription in Word would require posting the recordings to their cloud, I opted for a transcriber that could run on his laptop: Whisper Desktop (https://github.com/Const-me/Whisper).

It seems to work reasonably well, though the Whisper model, created by OpenAI, is known to insert #hallucinations into its transcripts (https://fortune.com/2024/10/26/openai-transcription-tool-whisper-hallucination-rate-ai-tools-hospitals-patients-doctors/). I was quite surprised that the implementation I chose could even run on his very outdated i5-4300U laptop CPU (albeit at far below real-time pace).

Have I just compromised my anti-LLM stance by setting up this transformer-based software to solve this transcription challenge?

GitHub - Const-me/Whisper: High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model


GitHub
Randomly quoting Ray Bradbury did not save lawyer from losing case over AI errors https://arstechni.ca/eHZa #ArtificialIntelligence #hallucinations #fakecitations #Policy #AI
Lawyer sets new standard for abuse of AI; judge tosses case

Behold the most overwrought AI legal filings you will ever gaze upon.

Ars Technica

The case of “vegetative electron microscopy” illustrated here shows what is badly needed in current #LLM research, with implications far beyond it. We need tools that help us curate huge corpora. We need to be able to trace #hallucinations back to the training data and understand the specific (often, surprisingly, #deterministic) features of the model input that cause a particular output.

If anyone is interested in collaborating on this, I'm in, have done some small-scale experiments and have already submitted a grant proposal.
https://theconversation.com/a-weird-phrase-is-plaguing-scientific-papers-and-we-traced-it-back-to-a-glitch-in-ai-training-data-254463
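A toy first step toward that kind of tracing is exact-phrase provenance search over a corpus: find every training document containing a suspect phrase, with enough surrounding context to see where it came from. The corpus, document ids, and context window below are invented for illustration; real tracing over web-scale corpora needs indexing, not a linear scan.

```python
import re

def trace_phrase(corpus: dict[str, str], phrase: str, window: int = 30) -> list[tuple[str, str]]:
    """Return (document id, surrounding context) for every occurrence of a
    suspect phrase -- a first step in tracing a model output back to the
    training data that could have produced it."""
    hits = []
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    for doc_id, text in corpus.items():
        for m in pattern.finditer(text):
            start = max(0, m.start() - window)
            end = min(len(text), m.end() + window)
            hits.append((doc_id, text[start:end]))
    return hits

# Hypothetical corpus: one clean document, one carrying the OCR column-merge artifact.
corpus = {
    "paper_a": "We used transmission electron microscopy to image the sample.",
    "paper_b": "Sections were examined by vegetative electron microscopy at 80 kV.",
}
hits = trace_phrase(corpus, "vegetative electron microscopy")
print(hits)  # only paper_b matches
```

Understanding the *deterministic* input-to-output link the post describes would additionally require probing the model itself; this sketch only covers the corpus-curation half.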

A weird phrase is plaguing scientific papers – and we traced it back to a glitch in AI training data

Once errors creep into the AI knowledge base, they can be very hard to get out.

The Conversation

While reviewing scientific papers, Lidiany Cerqueira had a recurring problem: every reference needed to be checked, as more and more of them were #hallucinations introduced by LLMs and chatbots. As a solution, she created CERCA during her Christmas holiday. Thanks to #Java, #JavaFX, open-source libraries, and free APIs, the number of references to check is significantly reduced, making her work much easier!

Interview on YouTube:
https://www.youtube.com/watch?v=QVN57j1zcik

More info on:
https://webtechie.be/post/2026-02-05-jfxinaction-lidiany-cerqueira-cerca/

Lidiany Cerqueira: CERCA, a tool to detect hallucinated references in scientific papers (#25)

YouTube
Lawyers representing #Amazon #customers in a proposed #classaction over supplement labeling have apologized to a Seattle federal judge for #artificialintelligence #hallucinations included in a recent filing, acknowledging "certain miscitations and misquotations" resulted from a Just Food Law PLLC attorney's use of the nascent technology and a failure by Boies Schiller Flexner LLP co-counsel to catch the errors. [Law360]