Saying that you do not want GenAI in the #books you read, the things you watch or the games you play is an understandable and NORMAL position. Maybe people have ethical concerns, maybe they love their artist homies. Maybe they don't like the bland garbage that AI generates. Stop framing this like a horde of neo-Luddites is starting the Butlerian Jihad (would be fun tho) just because they do not want to follow a romance author who has a computer shitting out novels instead of writing them herself.

Look, we do not have a lot of ways to avoid that shit in the workplace lately. But people like to select their own fun.

Books about AI are fun.

Books made by AI are not.

And please spare me the "both sides" argument. Because one of them is trying to force-feed things to the other. And one of them has all the money and resources and the other has not. This is not people taking sides. This is people trying to get the boot to stop pressing against their face.

You keep mentioning fear of the machines.

Not about that.

You keep mentioning grammar checking and transcription functionality.

Not about that.

Look, we get it. AI sounds hot. We read the same science fiction books and it is nice to think that maybe one day LLM technology can be leveraged against oppression. News about fake open models is fun in a sense because every time one of those pops up, some idiot is going to lose millions, yadda yadda. But you are missing a very important point here: a permission structure is being built around us, and stopping it is absolutely crucial.
Every time you pull a "both sides" between "AI hypers and deniers" you are basically telling me that the person worried about the destruction of their life, their job and the environment has the same level of delusion as a person like Peter Thiel, an eldritch horror in a vessel made of flesh that thinks humanity, umm, should not exist.
@berniethewordsmith I am more worried about the absolutely mediocre output of AI. For some it might be 'good enough' and that is ok. Please do understand that for most of us 'good enough' is just below our standards. Don't let an AI that is 'good enough' drive your car or have an AI that is 'good enough' do your finances. Accidents will happen and those accidents will be very costly.
@alterelefant I also worry about this. There is certainly an effort to convince people to settle for "just ok" stuff
@berniethewordsmith That 'just ok' might work for certain cases and people also need to respect that it just doesn't work in other cases.
@alterelefant @berniethewordsmith Also, if the "just ok" books are being shat out at several times the rate of actual proper books that have been written by an actual writer, then they become a fire hose that drowns out the good books. It's not just romance, either. I came to the unpleasant realisation that I've recently read some psychological thrillers that are very probably AI-generated, with varying degrees of "author" edits to make them readable. Some of them were ok, albeit with some elements that didn't seem to work that well, while one degenerated into an unholy mess for the last 20% of the "novel." I don't know exactly how many AI books I've read, as a lot of it comes down to the rate of publication being too high, and that can be hidden by the use of pseudonyms or different publishers. I don't really want to read books that people haven't bothered to write, either fully or in part, but it's going to become more and more difficult to avoid them.
@HollieK72
Now the fun starts when a new generation of LLMs is trained on the output of LLMs. What could possibly go wrong?
@berniethewordsmith
@alterelefant @HollieK72 @pluralistic called this "Habsburg AI" and I find the name incredibly fitting
@berniethewordsmith @alterelefant @HollieK72 Stole it from Jathan Sadowski of "This Machine Kills"