For anyone tracking what's going on with generative AI appearing in the eBook software calibre, the calibre developer seems to be asking us to avoid his software:

In a GitHub issue about adding LLM features:
I definitely think allowing the user to continue the conversation is useful. In my own use of LLMs I tend to often ask followup questions, being able to do so in the same window will be useful.
In other words he likes LLMs and uses them himself; he's probably not adding these features under pressure from users. I can't help but wonder whether there's vibe code in there.


In the bug report:
Wow, really! What is it with you people that think you can dictate what I choose to do with my time and my software? You find AI offensive, dont use it, or even better, dont use calibre, I can certainly do without users like you. Do NOT try to dictate to other people what they can or cannot do.
"You people", also known as paying users. He's dismissive of people's concerns about generative AI, and claims ownership of the software ("my software"). He tells people with concerns to get lost, setting up an antagonistic, us-versus-them scenario. We even get scream caps!

Personally, besides the fact that I have a zero tolerance policy about generative AI, I've had enough of arrogant software developers. Read the room.

#AI #GenAI #GenerativeAI #LLMs #calibre #eBooks #eBookManagers #AISlop #AIPoisoning #InformationOilSpill #dev #tech #FOSS #SoftwareDevelopment
feat: Add LLM tab to Lookup panel by amirthfultehrani · Pull Request #2838 · kovidgoyal/calibre

Dear Kovid, may this pull request find you very well! Following our discussion between each other and peers on MobileRead, I have implemented the proposed LLM integration as a tab in the lookout pa...

GitHub
Ughhhh, et tu, calibre?
New features
- Allow asking AI questions about any book in your calibre library. Right click the "View" button and choose "Discuss selected book(s) with AI"
- AI: Allow asking AI what book to read next by right clicking on a book and using the "Similar books" menu
- AI: Add a new backend for "LM Studio" which allows running various AI models locally
Release: 8.16.1, 04 Dec 2025; also announced on their GitHub

Calibre is one of those pieces of software that I use from time to time but don't follow closely. I wasn't aware they'd been sipping from the poisoned chalice.

#calibre #FOSS #OpenSource #books #eBooks #eBookManager #AIPoisoning #InformationOilSpill
calibre - What's new

calibre: The one stop solution for all your e-book needs. Comprehensive e-book software.

I sent feedback to Atlassian yesterday complaining about the "AI Assistant" they recently added to JIRA. Apparently an admin must disable the feature for everyone; you cannot disable it for yourself. A very suspicious dark pattern.

There's also an "AI summary" feature for tickets that looks very easy to select accidentally. Another dark pattern.

Very disappointed to see this.



#JIRA #Atlassian #AI #AISlop #InformationOilSpill #DarkPatterns
Speaking of widespread low-quality scientific publication and the need to take care with words: https://retractionwatch.com/2025/02/10/vegetative-electron-microscopy-fingerprint-paper-mill/
The phrase was so strange it would have stood out even to a non-scientist. Yet “vegetative electron microscopy” had already made it past reviewers and editors at several journals when a Russian chemist and scientific sleuth noticed the odd wording in a now-retracted paper in Springer Nature’s Environmental Science and Pollution Research.

Today, a Google Scholar search turns up nearly two dozen articles that refer to “vegetative electron microscopy” or “vegetative electron microscope,” including a paper from 2024 whose senior author is an editor at Elsevier, Retraction Watch has learned. The publisher told us it was “content” with the wording.
Note the presence of Springer Nature, notorious lately for low-quality AI slop and AI boosterism, and Elsevier, which is generally terrible.

#AI #GenerativeAI #LLM #AISlop #InformationOilSpill #AcademicPublishing #ScientificPublishing #PaperMill #PeerReview
As a nonsense phrase of shady provenance makes the rounds, Elsevier defends its use

The origin of the phrase? The phrase was so strange it would have stood out even to a non-scientist. Yet “vegetative electron microscopy” had already made it past reviewers and editors at several j…

Retraction Watch
WikiProject AI Cleanup
Welcome to WikiProject AI Cleanup—a collaboration to combat the increasing problem of unsourced, poorly-written AI-generated content on Wikipedia. If you would like to help, add yourself as a participant in the project, inquire on the talk page, and see the to-do list.
From https://en.wikipedia.org/wiki/Wikipedia:WikiProject_AI_Cleanup

#InformationOilSpill #LLM #GenAI #GenerativeAI #Wikipedia
Wikipedia:WikiProject AI Cleanup - Wikipedia

Minutes between seeing the "Try Gemini" button in Gmail and authoring a uBlock Origin filter to obliterate it: 5
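For anyone who wants to do the same, a cosmetic filter along these lines works in uBlock Origin's "My filters" tab. The `mail.google.com##selector` form is standard uBlock syntax; the specific selector below is a guess at how the button is labeled and may need adjusting to whatever markup Google is currently serving:

```
! Hide Gmail's "Try Gemini" promo button
! NOTE: the attribute selector is an assumption; inspect the element
! and adjust if Google changes the button's accessible label
mail.google.com##[aria-label^="Gemini"]
```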

#NoAI #AI #GenerativeAI #InformationOilSpill #Google #Gemini
I heard of this through @[email protected] 's newsletter and radio show but had to try it myself. The terrible results are still up.

This example is funny and absurd, but imagine people who are seeking advice on serious issues--medical, legal, financial--being shown the equivalent of "applum". Though tech has no formal concept of it, I'd characterize showing people results like this as malpractice. It almost surely runs afoul of standard technology ethics guidelines, which typically put something like "do no harm" near the top of the list.

I worked on large language models in the context of a startup company in the 2016-2019 timeframe ( https://bucci.onl/notes/Legit.ai ). It's one of the reasons I comment so frequently and so negatively on this technology here--I've worked with it, at least the generation of it from that time (much has changed in the meantime, though not at the fundamental level). We experimented a bit with natural language generation. I concluded at the time that it was nowhere near ready for prime time even in the restricted domain in which we were operating. Despite Google's vast computational resources, gigantic troves of data, and the intervening 5+ years of breakthroughs in this technology, its generative AI here is still not ready for prime time as far as I can see. Never say never, but I don't think it ever will be unless the application domain is well-scoped, which general web search is not.

The really sad part? There is non-LLM-based technology that can do this sort of thing pretty well when scoped carefully. Less embarrassingly badly than what Google is demonstrating here, that's for sure. There are folks within Google, or at least there used to be, who are well aware of this.

#Google #AI #GenerativeAI #GenAI #AIOverview #tech #PinkSlime #AIGoop #InformationOilSpill
Legit.ai

Anthony Bucci's personal web site

Anthony Bucci
This is truly remarkable. How are these projects being deployed given the exceptionally low quality of the output? I can't begin to imagine the depth of dysfunction at Google that would lead to something like this going live on their home page. Dangerous, embarrassing, and frankly sad stuff.

Note: I put it in the alt text of the image, but just to make sure it's clear: Solanum is definitely not the scientific name for a tomato. It's a large genus of flowering plants that includes tomatoes but also eggplants and potatoes. This result is a category error, like saying "machine" is another word for "car". It's pretty simple to avoid these in more conventional, grammar-based natural language generation systems. If Google cared to, they could filter outputs using lexical information plus a bit of shallow parsing and disambiguation, and avoid a decent fraction of weird results like these. Or, if they want to use only data-driven techniques, which they seem hellbent on pursuing, they could cross-check against Wikipedia.
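To illustrate the kind of lexical filter I mean: given hypernym (is-a) links from a resource like WordNet or Wikipedia's taxonomy, a synonym claim can be rejected whenever the proposed "synonym" is actually a broader category containing the word. This is only a toy sketch--the mini-taxonomy and function names below are made up for the example, not drawn from any real system:

```python
# Toy category-error filter: reject "X is another word for Y" claims
# when X is a broader category (hypernym) of Y rather than a synonym.
# The tiny hand-made taxonomy below stands in for WordNet-style data.

HYPERNYMS = {            # child -> parent (is-a) links
    "tomato": "solanum",
    "eggplant": "solanum",
    "potato": "solanum",
    "car": "motor vehicle",
    "motor vehicle": "machine",
}

SYNONYMS = {             # genuine same-level synonyms
    "car": {"automobile"},
}

def ancestors(term):
    """Walk child->parent links, collecting every broader category of term."""
    seen = set()
    while term in HYPERNYMS:
        term = HYPERNYMS[term]
        seen.add(term)
    return seen

def is_valid_synonym_claim(word, proposed):
    """A hypernym of word is a category error, not a synonym."""
    if proposed in ancestors(word):
        return False
    return proposed in SYNONYMS.get(word, set())

# "solanum is another word for tomato" -> category error, rejected
assert not is_valid_synonym_claim("tomato", "solanum")
# "machine is another word for car" -> category error two levels up, rejected
assert not is_valid_synonym_claim("car", "machine")
# "automobile is another word for car" -> genuine synonym, accepted
assert is_valid_synonym_claim("car", "automobile")
```

A real system would draw the is-a links from WordNet or a knowledge graph rather than a hand-built dict, but the check itself is this cheap.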

#Google #AI #GenerativeAI #GenAI #AIOverview #tech #PinkSlime #AIGoop #InformationOilSpill
The information oil spill caused by #GenAI continues to claim casualties: https://www.404media.co/bards-and-sages-closing-ai-generated-writing/
In a notice posted to the [Bards And Sages Publishing] site, founder ​Julie Ann Dawson wrote that effective March 6, she was winding down operations to focus on her health and “day job” that’s separate from the press. “All of these issues impacted my decision. However, I also have to confess to what may have been the final straws. AI...and authors behaving badly,” she wrote.
Closure announcement: https://www.bardsandsages.com/closure-announcement.html

#ChatGPT #GPT #LLaMa #Gemini #AI #GenerativeAI #GenAI #InformationOilSpill #Publishing #Fiction
Flood of AI-Generated Submissions ‘Final Straw’ for Small 22-Year-Old Publisher

“The problem with AI is the people who use AI. They don't respect the written word,” the founder of Bards and Sages said.

404 Media