A friend is pissy about Calibre adding A.I. to the ebook manager, so they're creating a new fork called Clbre, with the A.I. stripped out.

https://github.com/grimthorpe/clbre

#AI #AISlop #Calibre #eBook #eBookManager

@book *sigh* Yet again I don't understand why so-called AI is being added at all. Calibre literally exists for people who want more *manual* control of their ebook library. It already has batch processing, so why the hell would so-called AI even be helpful in any way? Probably preaching to the choir here, I know. lol

@JustGrist @book Well, "AI" is not really AI, and in some places it is useful.

Better to call it a pattern-based statistical estimator.

If you want to auto-classify things like e-books, that kind of model is an almost perfect fit.

And done right, it doesn't even need the internet.
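For instance (a rough sketch, not anything Calibre actually ships; it assumes an Ollama server on localhost with a small model such as llama3.2 already pulled), auto-tagging a book's genre can run entirely offline:

```python
# Sketch: tag a book's genre with a model running entirely on localhost.
# Assumes Ollama is installed, serving on its default port 11434, and that
# a small model (here "llama3.2", an assumed choice) has been pulled.
import json
import urllib.request

def classify_book(title: str, description: str) -> str:
    prompt = (
        "Classify this book into exactly one genre "
        "(e.g. sci-fi, fantasy, romance, non-fiction). "
        "Reply with the genre only.\n"
        f"Title: {title}\nDescription: {description}"
    )
    payload = json.dumps({
        "model": "llama3.2",
        "prompt": prompt,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's local REST endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

print(classify_book(
    "The Left Hand of Darkness",
    "An envoy is sent to a wintry planet whose inhabitants have no fixed sex.",
))
```

Nothing in that round trip leaves the machine.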

@book
For those interested, earlier releases of #calibre are available here:-

https://download.calibre-ebook.com/8.html

Previous calibre releases (8.x)

@mtconleyuk @book

What's the latest bot-free version, 8.16.1?

@resuna @book
It appears to be 8.10.0.

@resuna @mtconleyuk @book
Earlier, I was told it was 8.10, so that's what I regressed to (8.10.0).

@book How's it pronounced, though?
@tinyrabbit @book Sounds like Spanish… "clibre" would be pronounced "se libre", which means "be free".
@tinyrabbit @book
Kuhl-bre?
...
Cool-bre? :P

@wachoperro @tinyrabbit @book

I thought it was kʌ'liːbreɪ like kuh-LEE-bray

@tinyrabbit I'm saying it "club-ruh".

🤷‍♀️

@book Damn, I'm glad the Ubuntu snap store version I'm using is 5.x something.

Fucking #clankers don't need to be in every damn thing.

@book I hadn't realized they were adding LLM shit. Good for grimthorpe!
@book Thanks for the heads up. Froze the package version as soon as I confirmed it.
@book Is there a Patreon or some sort of donation link for your friend? I'm not a Python person, but I'd like to throw some recurring donations at those who are, to work on a fork.
@trashheap @book
No patreon or donation links yet - if I manage to get something working then I might consider it, but really I am hoping that I don't have to do that much work.
@book I don't get why this is such a big deal. After some research, it looks like they're using local AI models, with the option to use third parties. But... the third parties aren't included by default; you have to go out of your way to enable them. Local AI where nothing gets sent to the server is fine imo

@ohanamatsumae @book

> Local AI where nothing gets sent to the server is fine imo

No it's not.

@resuna okay. explain why?

@ohanamatsumae

It promotes the narrative that these deliberately deceptive parody generators are "AI".

@resuna ...what?

@ohanamatsumae

A large language model is a parody generator. It is not "intelligent", it does not reason, it does nothing but generate a plausible continuation of the prompt.

@resuna this doesn't explain why local AI models aren't fine to use

@ohanamatsumae

As an optional extension one can install and use that's fine, but including it by default is something I expect from a Facebook or Google, not an open source project.

@resuna it's technically off by default. The AI integration is useless until you install and set up the proper extension. The local AI, for example, requires Ollama to be installed.
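A quick illustration of what that means in practice (a sketch, not Calibre's own code, and assuming Ollama's default localhost port): the local backend has nothing to talk to unless a server is actually listening.

```python
# Sketch (not Calibre's code): the local-AI path is inert unless an
# Ollama server is actually listening on its default localhost port.
import urllib.request

def ollama_available(url: str = "http://localhost:11434") -> bool:
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200  # Ollama answers "Ollama is running"
    except OSError:  # connection refused, timeout, etc.
        return False

print("local AI backend usable:", ollama_available())
```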

@ohanamatsumae

Then the release notes are deceptive.

@resuna I can't say for certain. I don't have Calibre installed, but from what I've read about the new features on some online news sites, that's what I've gathered. There are options for GitHub (???), Google, some other one, and local AI. None of them are set up to be used right away. I think the local integration extension is pre-installed, but, like I said, it needs Ollama.

@ohanamatsumae

The release notes just say:

"New features

"Allow asking AI questions about any book in your calibre library. Right click the "View" button and choose "Discuss selected book(s) with AI"

"AI: Allow asking AI what book to read next by right clicking on a book and using the "Similar books" menu

"AI: Add a new backend for "LM Studio" which allows running various AI models locally"

OK, technically the last paragraph does say "*allows* running various AI models locally".

@ohanamatsumae

The implication is that the feature is on by default.

calibre/src/calibre/ai/prefs.py at master · kovidgoyal/calibre

@resuna the AI integration has existed since 8.11. Those release notes are just announcing new features for the AI integration.

https://itsfoss.com/news/ai-comes-to-calibre/

AI Comes to Open Source eBook Reader Calibre

@ohanamatsumae

I'm still on 7.4.0, so I haven't seen any of the 8.x release notes before today.

It still feels kind of creepy that it's even in there.

@resuna I agree that it's definitely unnecessary. But that's just kind of the environment we're in, currently. The AI bubble will pop sooner or later, and this'll be happening a lot less. Compared to how open source products usually integrate AI (looking at you, Mozilla), this is largely inoffensive. Definitely not worth the panic I'm seeing people get into.

@ohanamatsumae @resuna What do you think about the ethical concerns?

@nikclayton I think you should do research on the local models you use and double-check that they align with your moral views. There are thousands (okay, maybe only hundreds) of local models out there to use.

@ohanamatsumae @resuna local generative LLMs have one (but perhaps only one?) flaw in common with cloud generative LLMs:

They are generative LLMs.

Edit: Actually, there is a second flaw. Depending on how the model was trained (locally vs. elsewhere and downloaded), there can be ethical, climate, and copyright issues. I'm unsure if local training solves the climate issues - but it's unlikely.

@kboyd in terms of training locally, would it not draw the same amount of power as playing a modern AAA game for a few hours? I haven't trained AI myself, but from what I remember, it's the GPU that does all the heavy lifting.

@ohanamatsumae the benefit of a downloadable pre-trained model is that it can be trained once and then distributed, as opposed to each user having to train their own copy locally. In this particular case, local training would overall be a climate negative due to the duplicated effort.
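A rough back-of-envelope makes the scale gap clear (every number below is an assumed, illustrative figure, not a measurement):

```python
# Back-of-envelope only; all figures are assumed, illustrative values.
GPU_WATTS = 350  # assumed draw of a single high-end GPU under load

# A couple of hours of local inference (or gaming) on one GPU:
inference_kwh = GPU_WATTS * 2 / 1000

# Training a sizeable model from scratch: assume 1,000 such GPUs for 30 days.
training_kwh = GPU_WATTS * 1000 * 30 * 24 / 1000

print(f"local inference session: ~{inference_kwh:.1f} kWh")
print(f"one training run:        ~{training_kwh:,.0f} kWh")
print(f"ratio:                   ~{training_kwh / inference_kwh:,.0f}x")
```

That one-time training cost is what a downloaded model amortizes across everyone who reuses the weights; running it locally afterwards is the cheap part.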

@resuna @ohanamatsumae that is one use of an LLM.

They can also be used for other legitimate purposes, like summarizing a document.

Like any new technology, LLMs have good and bad uses, and anyone judging the technology based on only one side is going to deprive themselves of a potential benefit.

@nassau69 @ohanamatsumae

A large language model doesn't actually summarize a document; it shortens it, and what it produces is subject to the same shortcomings as anything else it produces. It generates something that looks like a continuation of a text that ends with a request for a summary, but the response may leave out important points or even reverse the sense of the original text, while still looking like a plausible continuation of the prompt.

@resuna @nassau69 @ohanamatsumae I'm still puzzled why this is so damned difficult for so many people to understand.

I mean, sure, the big gen-"AI" companies have doubtlessly got people who fall under Upton Sinclair's observation that "it is difficult to get a man to understand something when his salary depends upon his not understanding it."

But that doesn't explain all the people who use these tools and don't notice the shortcomings.

You've never had an LLM "summarize" a document, and gotten back something that is shorter but a terrible summary?

You've never had an LLM "improve" your text, and gotten back something that, at most, had some grammar mistakes fixed, but had absolutely terrible, cliched style -- possibly much worse than what you started with?

You've never asked an LLM a question about something you know a reasonable amount about, and gotten back absolute garbage, full of bad logic, factual errors, and trite, tedious filler?

It happens to me all the time, and has ever since LLMs became popular three years ago. I simply can't use them for more than 3 or 4 prompts in a row without getting *something* grotesquely wrong, inappropriate for the task, or both.

@ohanamatsumae Some people refuse AI functionality altogether, anywhere and in any form, and even reject the term. (Mostly in opposition to LLMs, not Deep Learning in general.) For ethical or ecological or political or social reasons. Personally, I can see the point(s), and certainly accept that kind of opposition, but I think it's hard to keep up even now and it will become harder still as LLMs evolve.

@ohanamatsumae @book There’s a lot of nuance here. If one holds the opinion that all this stuff is crap, I’ve got their back. If someone else wants to use local execution of a free copy of an LLM model trained with stolen prose to query and interact with their stolen library of ebooks in the self-hosted Calibre instance, I see multiple practical and moral hazards, but I’ve got their back too. Calibre added a bit of UI and data query to allow you to use external models to interact with your library. For some that will be gross. For some it will make things more accessible. I’d ask people to avoid using the hosted LLM APIs, but I’ve got hardware privilege.

There are a lot of people getting screwed a lot of ways right now and I am in their corner. But nuance exists too and I’m not gonna say anybody is evil just because they got some of this slime on them. That just radicalizes the confused and curious.

@ohanamatsumae @book I'd say the big deal is the idea that "AI" is needed to manage a book collection.

There are way too many other annoyances* with Calibre as it is, and removing more control** isn't a good idea. IMO, of course.

* It is still under the annoyed-enough-to-throw-it-out level. But getting there. Frozen, tho. No more updates.

** Yes, I WILL argue that even "local AI" will mean it'll DO stuff to your collection you have little control over. It's just so ... unnecessary.

@ohanamatsumae @book because for many, AI is bad regardless of how it's used or implemented.

No different from those who think smartphones are bad, or any other new technology throughout history.

@book that's just perfect!