Saying that you do not want GenAI in the #books you read, the things you watch or the games you play is an understandable and NORMAL position. Maybe people have ethical concerns, maybe they love their artist homies. Maybe they don't like the bland garbage that AI generates. Stop framing this like a horde of neo-Luddites is starting the Butlerian Jihad (would be fun though) just because they do not want to follow a romance author who has a computer shitting out novels instead of writing them herself.

Look, lately we do not have a lot of ways to avoid that shit in the workplace. But people get to pick their own fun.

Books about AI are fun.

Books made by AI are not.

And please spare me the "both sides" argument, because one of them is trying to force-feed things to the other. And one of them has all the money and resources and the other has not. This is not people taking sides. This is people trying to get the boot to stop pressing against their face.

You keep mentioning fear of the machines.

Not about that.

You keep mentioning grammar checking and transcription functionality.

Not about that.

Look, we get it. AI sounds hot. We read the same science fiction books and it is nice to think that maybe one day LLM technology can be leveraged against oppression. News about fake open models is fun in a sense, because every time one of those pops up, some idiot is going to lose millions, yadda yadda. But you are missing a very important point here: a permission structure is being built around us, and stopping it is absolutely crucial.
Every time you pull a "both sides" routine between "AI hypers and deniers" you are basically telling me that the person worried about the destruction of their life, their job and the environment has the same level of delusion as a person like Peter Thiel, an eldritch horror in a vessel made of flesh who thinks humanity, umm, should not exist.
This is Mastodon. There are people here who can install GotoSocial on a smart toaster oven and then proceed to launch it into low orbit just for fun. Please stop making allusions to technophobia. It's irritating and disrespectful.
@berniethewordsmith I actually just started my own GotoSocial instance a few days ago, and I only use #AI in really limited ways. BTW: My account there is @gtsadmin . Would like to make new friends there.
@d1 @gtsadmin What kind of machine do you have it running on?
@berniethewordsmith @gtsadmin Just a cheap VPS with Akamai. 1GB of RAM.
@berniethewordsmith technophobia, when half of my room is cables from one side to the other, plus 4 desktop computers, 3 laptops and 4 netbooks in different stages of "working", 2 Raspberry Pis, 3 consoles that also run Linux, and the Wii running Windows NT 4 with WordPerfect just because fuck it, it's cool. I fucking love puters.
@cygnathreadbare Which one is the bulkiest?

@berniethewordsmith Bulkiest desktop: my newest one, a Ryzen 5 with an RTX 5070. It's not huge, but it's like 10 cm taller and 5 cm wider than my old one. Bulkiest laptop: a 2008 Celeron upgraded to a Core 2 Duo, now inactive because the charger died two months ago (probably because the Core 2 Duo draws almost twice the power).

I should still have an even bulkier desktop at my old place, a 2005-ish Athlon I set up in a tower I found in the garbage. Never measured it, but it was like 2x a standard PC tower. It was empty except for some SCSI cables, so I guess it was used for (literal) mass storage or mass CD/DVD burning.

@cygnathreadbare I have a lot of respect for your craft
@berniethewordsmith Excellent thread, Bernie. Rather than technophobes, I'd class those sounding the warnings and rejecting LLMs/"AI"/the Emperor's New Clothes as the canary in the coal mine.
I've been involved with computing/programming since the early 1980s and have been enthusiastic about many of the technological developments. Most of the tech was used to produce tools which we could use to help get stuff done. We knew how it worked, and outcomes could be tested and predicted. LLMs are not this!
@MyricaGale A lot of very smart people I know are constantly tapping the sign about the non-deterministic nature of these models. I see them tapping the sign louder and louder every time there is some news about healthcare applications and the like.
@berniethewordsmith
definitely no lack of technical ability around these parts... #antiai
@noiseician I would not mess with people capable of accessing AO3 from a metro ticket vending machine

@berniethewordsmith might be good to include that you're talking about (american, big-tech) LLMs  

We've used various forms of ML & AI for decades, for everything from video games to produce logistics and vaccine research. It's a group of technologies, not a singular thing (LLMs are not the whole group!)

@iamada Yes, I am referring specifically to the kind of Generative AI that is being shoehorned everywhere. Alien: Isolation is a good example of an AI in a videogame that is absolutely incredible. AI slop is not.
@iamada
Machine Learning has been around for a while and is proven technology. Indeed not to be mistaken for LLMs.
@berniethewordsmith
@alterelefant @iamada Yeah, I'm definitely referring to this LLM-in-the-soup nightmare we are currently in.
@berniethewordsmith
The output of an LLM has to be thoroughly checked by a subject matter expert. Failing to do so will make things go south very quickly.
@iamada
@iamada @alterelefant @berniethewordsmith which renders it useless. In fact worse than useless as time and effort of specialists is diverted from more useful work.
@seb321
In some cases it might save a couple of minutes here and there but it is definitely not the revolution they try to make us believe it is. At least if you value the quality and correctness of your work.
@iamada @berniethewordsmith
@alterelefant @berniethewordsmith @iamada it reminds me of the scene in Blade Runner where Tyrell explains to Roy how his life is inhibited and any way around the inhibitor leads back to the same result. Any argument that a flawed LLM can be salvaged by human intervention renders the LLM's function obsolete.

@iamada @berniethewordsmith
It's too late. Tech bros have co-opted the AI term. If you don't want to be confused for them, use more specific terms.

Say pattern recognition, not AI.
Say grammar check, not AI.
Say NPC logic, not AI.

xkcd: Coronavirus Name

@matildalove it can be easy to think so, but then we're also having good projects get squashed and undeserving people receive misdirected hate 🤷‍♀️

Meanwhile, ChatGPT-integrated bullshit is taking all of the funding and spreading like wildfire.

@iamada

i say this with all respect in the world. you're full of shit right now.

the thread you are responding to is quite specific, they're talking about GenAI content in what should be human creative works. you're just concern trolling here.

@berniethewordsmith I am more worried about the absolute mediocre output of ai. For some it might be 'good enough' and that is ok. Please do understand that for most of us 'good enough' is just below our standards. Don't let an ai that is 'good enough' drive your car or have an ai that is 'good enough' do your finances. Accidents will happen and those accidents will be very costly.
@alterelefant I also worry about this. There is certainly an effort to convince people to settle with "just ok" stuff
@berniethewordsmith That 'just ok' might work for certain cases and people also need to respect that it just doesn't work in other cases.
@alterelefant @berniethewordsmith Also, if the "just ok" books are being shat out at several times the rate of actual proper books that have been written by an actual writer, then they become a fire hose that drowns out the good books.

It's not just romance, either. I came to the unpleasant realisation that I've recently read some psychological thrillers that are very probably AI-generated, with varying degrees of "author" edits to make them readable. Some of them were ok, albeit with some elements that didn't seem to work that well; one degenerated into an unholy mess for the last 20% of the "novel."

I don't know exactly how many AI books I've read, as a lot of it comes down to the rate of publication being too high, and that can be hidden by the use of pseudonyms, or by using different publishers. I don't really want to read books that people haven't bothered to write, either fully or in part, but it's going to become more difficult to do that.
@HollieK72
Now the fun starts when a new generation of LLMs is trained on the output of LLMs. What could possibly go wrong?
@berniethewordsmith
@alterelefant @HollieK72 @pluralistic called this "Habsburg AI" and I find the name incredibly fitting

@berniethewordsmith
Yes, that one. Always something fun to read.

From what I understand, this is a great challenge for all the companies out there trying to assemble a high-quality dataset to train the next generation of ai. Humans generate new content, but ai generates much more of it, and you simply do not want ai-generated output to end up in your ai training set, as you know the quality of that content is subpar and unreliable. The irony.

https://en.wikipedia.org/wiki/Model_collapse

@HollieK72 @pluralistic

Model collapse - Wikipedia
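For what it's worth, the collapse dynamic is easy to see in a toy simulation (my own sketch, not from the Wikipedia article or any source in this thread): stand in for "an LLM trained on its predecessor's output" with a Gaussian fitted only to samples drawn from the previous generation's Gaussian. The fitted spread reliably drifts toward zero over generations, i.e. the "model" loses the diversity of the original data.

```python
# Toy "Habsburg AI" / model collapse demo. Each generation's "model" is a
# Gaussian (mean, std) fitted ONLY to samples drawn from the previous
# generation's model -- no fresh human data ever enters the loop.
import random
import statistics


def train_on_predecessor(mean, std, n_samples, rng):
    """Fit a new Gaussian to n_samples drawn from the old one."""
    samples = [rng.gauss(mean, std) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)


def simulate_collapse(generations=2000, n_samples=50, seed=42):
    rng = random.Random(seed)
    mean, std = 0.0, 1.0          # generation 0: the "real" human-data distribution
    std_history = [std]
    for _ in range(generations):
        mean, std = train_on_predecessor(mean, std, n_samples, rng)
        std_history.append(std)
    return std_history


if __name__ == "__main__":
    history = simulate_collapse()
    print(f"initial std: {history[0]:.3f}, final std: {history[-1]:.2e}")
```

Because each generation resamples from a finite-sample estimate, the log of the spread takes a random walk with a downward drift, so the variance collapses; this is only an analogy for the text case, but it is the same mechanism the model-collapse literature describes.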

@alterelefant
If AI dominates completely, how will people be able to free themselves from it?
@nuwagaba2
My fear is that we are already past that point.
@alterelefant
There's no hope for freedom from AI?

@nuwagaba2
I have several reasons for thinking so.

Music platforms like #Spotify are being bombarded by #ai generated music and I get the impression they are not interested in stopping that massive influx of mediocre content.

Online comment sections in some places are flooded by bots. It has become almost impossible to distinguish between a human and a bot, as #LLMs are able to convincingly mimic the behavior of average humans in comment sections. We can pick up on two bots, but identifying hundreds of them?

@alterelefant
Who benefits more from this when AI is becoming inevitable?
@nuwagaba2
It is not the humans in the comment sections who benefit from this.
@alterelefant
Exactly. If humans knew that AI is meant not to serve them but to replace them, they would do something to minimise it. Can I share with you about my project?
@nuwagaba2
Sure, is what you created somewhere in the public domain?

@alterelefant

It's about fighting hunger in my community. I work with 12 young volunteer farmers to make this a reality by growing food for the needy, educating local farmers in advanced agricultural skills to help them improve their production, and equipping beginner farmers with required tools like seeds, fertilisers and organic pesticides to help them produce the best out of their gardens, as well as combating climate change through tree planting. Do you have such initiatives there?

@nuwagaba2
Here in The Netherlands agriculture is highly mechanised, not to say industrialized. I am sure there are collaborative projects out there, it is however not my area of operation.
@alterelefant
That's okay. If I had the opportunity, I wouldn't hesitate to connect with them. Our initiative is currently in a crisis where we have been forced to evacuate the land that we have been renting for our activities. We're seeking funds to buy our own land to make our dream a reality. Your support, either through sharing or donating to our campaign, would make a great impact.
https://gofund.me/3f38fe9d0
@berniethewordsmith @alterelefant @HollieK72 Stole it from Jathan Sadowski of "This Machine Kills"
@alterelefant @berniethewordsmith AI? No, thanks. Literature made by AI has no meaning and no purpose.
@deadrobot
Texts not worth writing are texts not worth reading.
@berniethewordsmith

@alterelefant @berniethewordsmith A small recent example that could have ended in tragedy: https://www.surfertoday.com/environment/chatgpt-wrong-tide-times-wales-rescue

Belief in "AI" information is a shortcut to obtaining the Darwin Award....

Swimmers saved after trusting ChatGPT's tide advice in South Wales

The pair were rescued near Sully Island, just off the coast of Swanbridge, an area with the second-highest tidal range in the world. Lesson learned: don't trust AI for tide information.

@MyricaGale
Yes, why would you consult official sources when you can have ai lie to you?
@berniethewordsmith
@alterelefant @MyricaGale At least in previous iterations of "was online, must be true" you had an actual creative human inventing the bullshit. There was passion in the lie 😂

@berniethewordsmith
When people try to convince the reader that something is true when it isn't, your 'bullshit meter' should be able to pick up on that and say: hold on, that doesn't add up.

It feels like some people are not critical enough towards #LLMs pulling the same tricks as they mimic human writing. Those individuals probably also weren't too critical about those human-generated stories to begin with.
@MyricaGale

@alterelefant @berniethewordsmith When I was a student in the late 1970s I played with the ELIZA program on the Uni's mainframe system. After the initial phase of 'how does it do that' it became a bit repetitive and tedious. The main fun to be gained from it was introducing others to it and seeing how many thought they were actually communicating with something 'intelligent'. From my (very limited) usage of MetaAI I feel the same process in action (on a larger scale).
@alterelefant @berniethewordsmith And nobody will be responsible for those accidents, because the AI itself can't be responsible, and the people who make it, sell it, and market it as able to do things it can't really do will be able to get off consequence-free because of an asterisk in the fine print.
@Linebyline
Yes, accountability doesn't seem to be a thing here.
@berniethewordsmith
I think you're misrepresenting the "both sides" people. Sure, some are just like the "just asking questions" people, but the LLM/AI discussion has been derailed.

The LLM peddlers are hyping up their machines as general AI that is perpetually 2 months away from achieving sentience. It probably started as naive optimism and has devolved into a need to keep stocks up to justify insane bubble-like valuations.

Simultaneously, people against it insist it's nothing more than a T9 dictionary without any merit or use. That is a false dichotomy and IMO disingenuous as it gives the LLM peddlers an easy counter: LLMs are more than prediction machines, even if they are not the super-intelligences the other side claims.

LLMs (and similar generative neural networks) are useful for certain things. Photoshop is augmented with tools for cleaning up images. If you pay "a real artist" to make an image "without AI," you can be 100% certain they have used "AI" as part of their process. Textual LLMs can be useful to get a starting point or reference for research, or to massage text.

"Both sides" is not saying that both the people that underplay and those that overplay the abilities of LLMs are right. It's saying they are both wrong and that LLMs can have uses even if they are not general thinking machines.

Refusing opinions other than "LLMs are useless for everything" allows people to (rightfully) ignore actual good points about ethics in training.

@michael Gotta be honest and say that I do not very often see discourse that says "LLMs are useless for everything". Like, has anyone actually said that? I definitely have seen the "fancy autocorrect" one, but... it is more of a tongue-in-cheek joke than a scientific thesis about the full scope of those models.

The "two months until" thing is always very funny, because they always have to explain why the next two months are so different from all the previous batches of two months.

@berniethewordsmith Hi, it's me, I'm saying it. Or rather, it's not that they have literally *no* use, but of the few things they can do reliably with no major downsides, I'm not aware of any that a more primitive tool can't do as well or better.
That might be a matter of filter bubbles (even if not algorithmic on here)? I do see people who are so against LLMs that they argue they have no place at all.

I read your post as more or less doing that. If that is wrong, I apologize.

I am worried about you making AI into a political topic: the workers vs the employers, Thiel vs human beings. I think you have some good points and agree with many of them, but making AI political is a route to losing. As soon as it becomes political, people will defend the indefensible just because "the other side" is against it.

I find it's better to engage also with the people you disagree with. To take your example of games: I am not sure there is nothing to be gained from LLMs here. Sure, Thiel and the rest of the Epstein-class propose to use LLMs in place of writers, graphics artists, and developers. I agree with you that is dumb. But I think there is a place for LLMs in gaming. For example, minor NPCs in an RPG: instead of giving them 10 voiced lines and a generic side-quest, a writer could give them a motivation, background, and a puzzle (some valuable information or quest you need to convince them to give you, taking background and motivation into account), so the player can have more engaging conversations with them. That would also necessitate using a voice model, and a voice actor could collaborate on having one created and licensed for limited use. Or an LLM could be used to make variations of side-quests.

The example is not perfect, could be part of a slippery slope, and has not been fully thought through, but I think something like that from somebody who knows more about game production is more productive than saying "zero AI in games," and is a valid way of hearing both sides.

@michael @berniethewordsmith What are LLMs, then, if not merely next-token prediction? Obviously they're more sophisticated than a Markov chain, but if they're more than prediction, *what* more?

Also you're flat-out wrong about artists. A great many of them hate AI enough that they won't use it on principle, and avoid tools that might sneak it into their workflow without their consent.
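To make the "merely next-token prediction" baseline in this exchange concrete: a bigram (Markov-chain) predictor, which is all the "fancy autocomplete" jibe literally describes, fits in a few lines. This is my own illustrative toy, not how any real LLM works; the open question upthread is what LLMs add beyond this kind of table lookup.

```python
# Toy bigram "next-token predictor": the Markov-chain baseline.
# It only ever looks at the single previous token, whereas LLMs
# condition on long contexts through learned representations.
from collections import Counter, defaultdict


def train_bigrams(tokens):
    """Count which token follows which in the corpus."""
    table = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        table[cur][nxt] += 1
    return table


def predict_next(table, token):
    """Most frequent successor of `token`, or None if unseen."""
    if token not in table:
        return None
    return table[token].most_common(1)[0][0]


corpus = "the cat sat on the mat . the cat ran .".split()
table = train_bigrams(corpus)
print(predict_next(table, "the"))  # "cat" follows "the" twice, "mat" once
```

A model like this has no notion of grammar, topic, or context beyond one token back, which is exactly why "it's just next-token prediction" undersells what transformer LLMs demonstrably do, even for people who think that extra capability is still not worth the costs.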