A friend is pissy about Calibre adding A.I. to the ebook manager, so they are creating a new fork called Clbre, with the A.I. stripped out.
@JustGrist @book Well, "AI" is not really AI, and in some places it is useful.
Better to call it a pattern-based statistical estimator.
If you want to auto-classify things like e-books, it is an almost perfect solution.
And done right, it doesn't even need the internet.
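(To illustrate the "pattern-based statistical estimator" point: offline classification really does need nothing beyond the text itself. This is a made-up toy sketch, not anything Calibre actually ships; the example blurbs and labels are invented.)

```python
from collections import Counter
import math

# Toy offline classifier: label a book blurb by cosine similarity of
# word counts against labeled examples. No network access involved.

def vectorize(text: str) -> Counter:
    """Bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(blurb: str, labeled: dict[str, str]) -> str:
    """Return the label whose example blurb is most similar."""
    vec = vectorize(blurb)
    return max(labeled, key=lambda label: cosine(vec, vectorize(labeled[label])))

# Invented example data, purely for illustration.
examples = {
    "sci-fi": "starship crew explores a distant planet and alien ruins",
    "mystery": "a detective investigates a murder in a small quiet town",
}
print(classify("the detective found clues about the murder", examples))  # prints "mystery"
```

Real metadata tools use far better statistics than word counts, of course; the point is only that none of it requires an internet connection.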
What's the latest bot-free version, 8.16.1?
I thought it was kʌ'liːbreɪ like kuh-LEE-bray
@tinyrabbit I'm saying it "club-ruh".
🤷‍♀️
It promotes the narrative that these deliberately deceptive parody generators are "AI".
A large language model is a parody generator. It is not "intelligent", it does not reason, it does nothing but generate a plausible continuation of the prompt.
As an optional extension one can install and use that's fine, but including it by default is something I expect from a Facebook or Google, not an open source project.
Then the release notes are deceptive.
The release notes just say:
"New features
"Allow asking AI questions about any book in your calibre library. Right click the "View" button and choose "Discuss selected book(s) with AI"
"AI: Allow asking AI what book to read next by right clicking on a book and using the "Similar books" menu
"AI: Add a new backend for "LM Studio" which allows running various AI models locally"
OK, technically the last paragraph does say "*allows* running various AI models locally".
The implication is that the feature is on by default.
Like I said, deceptive.
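(Aside on the "LM Studio" backend mentioned in those release notes: LM Studio serves an OpenAI-compatible API on localhost, so queries stay on your own machine. A minimal sketch of talking to such a local server, assuming it is running with a model loaded; the port and model name are assumptions, check your own server settings.)

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions format.
# Default port 1234 and model name "local-model" are assumptions.
LOCAL_URL = "http://localhost:1234/v1/chat/completions"

def build_request(question: str, model: str = "local-model") -> urllib.request.Request:
    """Build a chat-completion request aimed at the local server.
    Nothing leaves the machine: the URL points at localhost."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        LOCAL_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Actually sending it requires LM Studio to be running locally:
# with urllib.request.urlopen(build_request("Suggest a book like Dune")) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```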
@resuna the AI integration has existed since 8.11. Those release notes are just announcing new features for the AI integration.
I'm still on 7.4.0 so I haven't seen any of the 8.x release notes before today.
It still feels kind of creepy that it's even in there.
@ohanamatsumae What do you think about the ethical concerns?
@ohanamatsumae @resuna local generative LLMs have one (but perhaps only one?) flaw in common with cloud generative LLMs:
They are generative LLMs.
Edit: Actually, there is a second flaw. Depending on how the model was trained (locally, vs. elsewhere and downloaded), it can raise ethical, climate, and copyright issues. I'm unsure whether local training solves the climate issues, but it's unlikely.
@resuna @ohanamatsumae that is one use of an LLM
They can also be used for other legitimate purposes like summarizing a document
Like any new technology, LLMs have good and bad uses, and anyone judging the technology based on only one side is going to deprive themselves of a potential benefit
A large language model doesn't actually summarize a document; it shortens one, and what it produces is subject to the same shortcomings as anything else it generates. It produces something that looks like a continuation of text ending in a request for a summary, but the response may leave out important points, or even reverse the sense of the original text, while still looking like a plausible continuation of the prompt.
@resuna @nassau69 @ohanamatsumae I'm still puzzled why this is so damned difficult for so many people to understand.
I mean, sure, the big gen-"AI" companies have doubtlessly got people who fall under Upton Sinclair's observation that "it is difficult to get a man to understand something when his salary depends upon his not understanding it."
But that doesn't explain all the people who use these tools and don't notice the shortcomings.
You've never had an LLM "summarize" a document, and gotten back something that is shorter but a terrible summary?
You've never had an LLM "improve" your text, and gotten back something that, at most, had some grammar mistakes fixed, but had absolutely terrible, cliched style -- possibly much worse than what you started with?
You've never asked an LLM a question about something you know a reasonable amount about, and gotten back absolute garbage, full of bad logic, factual errors, and trite, tedious filler?
It happens to me all the time, and has ever since LLMs became popular three years ago. I simply can't use them for more than 3 or 4 prompts in a row without getting *something* grotesquely wrong, inappropriate for the task, or both.
@ohanamatsumae @book There’s a lot of nuance here. If one holds the opinion that all this stuff is crap, I’ve got their back. If someone else wants to use local execution of a free copy of an LLM trained on stolen prose to query and interact with their stolen library of ebooks in their self-hosted Calibre instance, I see multiple practical and moral hazards, but I’ve got their back too. Calibre added a bit of UI and data query to allow you to use external models to interact with your library. For some that will be gross. For some it will make things more accessible. I’d ask people to avoid using the hosted LLM APIs, but I’ve got hardware privilege.
There are a lot of people getting screwed a lot of ways right now and I am in their corner. But nuance exists too and I’m not gonna say anybody is evil just because they got some of this slime on them. That just radicalizes the confused and curious.
@ohanamatsumae @book I'd say the big deal is the idea that "AI" is needed to manage a book collection.
There are way too many other annoyances* with Calibre as it is, and removing more control** isn't a good idea. IMO, of course.
* It is still under the annoyed-enough-to-throw-it-out level. But getting there. Frozen, tho. No more updates.
** Yes, I WILL argue that even "local AI" will mean it'll DO stuff to your collection you have little control over. It's just so ... unnecessary.
@ohanamatsumae @book because for many, AI is bad regardless of how it’s used or implemented
No different from those who think smartphones are bad, or any other new technology throughout history