How do I explain my reluctance to use generative AI in good faith?

https://sopuli.xyz/post/37189620


I want to let people know why I’m strictly against using AI in everything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow the same. Any sources I try to find to cite regarding my viewpoint are either mild enough to be considered AI generated themselves or filled with extremist views of the author. I want to explain the situation in an objective manner that is simple to understand and also alarming enough for them to take action.

If it’s real life, just talk to them.

If it’s online, especially here on lemmy, there’s a lot of AI brain rotted people who are just going to copy/paste your comments into a chatbot and you’re wasting time.

They also tend to follow you around.

They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

I paste people's AI questions into a chatbot for the humor of it.

They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

That’s the issue. I do wish to warn them, or even just inform them of what using AI recklessly could lead to.

Why care?

You’re wanting to go out and argue with people and try to use logic when that part of their brain has literally atrophied.

It’s not going to accomplish anything, and likely just drive them deeper into AI.

Plenty of people who need help actually want it; put your energy towards that if you want to help people.

The post is aimed at situations I face where I state, among people I know, that I don’t use AI, followed by them asking why not. Instead of pushing them away by saying “Just because” or getting into jargon that is completely unfamiliar to them, I wish to properly inform them why I have made this decision and why they should too.

I am also able to identify people to whom there’s no point discussing this. I’m not asking to convince them too.

I wish to properly inform them why I have made this decision and why they should too.

You’re asking how to verbalize why you don’t like AI, but you won’t say why you don’t like AI…

Let’s see if this helps, imagine someone asks you:

I don’t like pizza, how do I tell people the reasons why I don’t like pizza?

How the absolute fuck would you know how to explain it when you don’t know why they don’t like pizza?

You do have a point. I think I may be overthinking this after all. I’ll just try to talk with them about this upfront.

Why care?

To give some fucks, probably.

Yup, that’s dbzer0

They've lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

More likely they feel insulted by people saying how "brain-rotted" they are.

What would the inoffensive way of phrasing it be?

Genuinely every single pro-AI person I’ve spoken with both irl and online has been clearly struggling cognitively. It’s like 10x worse than the effects of basic social media addiction. People also appear to actively change for the worse if they get conned into adopting it. Brain rot is apparently a symptom of AI use as literally as tooth rot is a symptom of smoking.

Speaking of smoking and vaping, on top of being bad for you objectively, it’s lame and gross. Now that that narrative is firmly established we have actually started seeing youth nicotine use decline rapidly again, just like it was before vaping became a thing

What would the inoffensive way of phrasing it be?

...and then you proceed to spend the next two paragraphs continuing to rant about how mentally deficient you think AI users are.

Not that, for starters.

The lung capacity of smokers is deficient, yes? Is the mere fact offensive? Should we just not talk about how someone struggling to breathe as they walk up stairs is the direct result of their smoking?

This is literally begging the question.

I don’t think it is, nor do I think name-dropping random fallacies without engaging with the topic makes for particularly good conversation. If you have issues with OP’s phrasing, it would benefit all of us moving forward if we found a better way to talk about it, yes?

It's not a random fallacy, it's the one you're engaging in. Look it up. Your analogy presupposes an answer to the question that is actually at hand. It's the classic "have you stopped beating your wife" situation.

I am intimately familiar with the fallacy. You don’t know how to apply it. I have presupposed nothing.

You can see very clearly from the structure of my post that the brain rot I am referring to is established via anecdote. It is my direct experience. This is obviously low quality evidence by itself for the establishment of my conclusion as a broader fact, and we could absolutely go down that road and start linking to the actual cognitive decline studies if you wanted

But my ‘argument’ is simply not structured as a begging the question fallacy. I am literally saying that I have personally observed that all AI users I encounter are “wife beaters”, and am proceeding with my analogy from there

“Given that we have identified a group of wife beaters, and you dislike the term ‘wife beater’, how can we better phrase it to improve domestic abuse interventions?” Does not become a begging the question fallacy just because you disagree with the initial classification of who is a wife beater

I have presupposed nothing.

You wrote:

The lung capacity of smokers is deficient, yes? Is the mere fact offensive? Should we just not talk about how someone struggling to breathe as they walk up stairs is the direct result of their smoking?

By using this analogy for the "brain rot" you claim comes from AI use, you are presupposing that it actually happens. You're putting as much confidence in that as there is in the well-established but completely unrelated effect of smoking on lung capacity.

Ultimately, what this whole exchange boils down to:

OP: How do I tell people I don't use AI without insulting them?

You: Tell them I think they're stupid.

How useful.

You are factually incorrect, willfully ignoring my point, and you don’t even appear to know who you’re talking to, confusing me with an above poster in this conversation.

Your misattribution of a specific fallacy as well as your refusal to engage in the actual topic will endure as a mark of shame against you, and I will add you as yet another example in the list of pro-AI outcomes I have observed. Cheers

What about that exchange makes you think they are pro AI? They seemed to be open minded to learning more about the topic but for some reason nothing was resolved.

To be honest with you, at the time I literally just got the vibe they were pro AI based on their defensiveness as well as their evident inability to participate in basic conversation, which is a hallmark of AI induced enfeeblement. I went with my gut, in other words

Ah, and my gut was correct. A quick look through their posting history just from this week reveals they use AI and are looking forward to its further inclusion in Firefox. Half their comments generally are defending AI tech giants, including minimizing the environmental and privacy concerns.

My 2c on a different topic: open minded people don’t try to discredit you on a technicality while actively shoehorning you into it and ignoring your actual words. I don’t detect the faintest hint of willingness to learn, either. We’re talking about the same person?

Looking through the interaction again, perhaps you are right and I was reading into it too much. They were stuck trying to get you to admit brain rot isn’t a foregone conclusion, and wouldn’t accept that you had already answered it by noting this was your experience. I do want to add to one of their points. If you start with the premise that AI causes brain rot and you are generally hostile/aggressive in pushing that view, I would imagine it becomes a sort of self fulfilling prophecy that you will only have negative interactions with brain rotted individuals.

I think “brain rot” is because most people are lazy. YouTube/TikTok/TV “causes” brain rot in the same way. If people want to turn off their brain and fill it with mush, it will happen regardless. Counterpoint: I reference videos on YouTube fairly often to help me fix something or learn to play an instrument.

AI use is probably the biggest threat to what I am calling “lazy” people because it is interactive, “addictive”, and the sycophantic direction it’s taking just can’t be healthy, but I’m not so sure people will come to depend on it any more than other technologies. I’m sure you saw the news of AI contributing toward suicides, but as a counterpoint, organizing knowledge for me to make decisions is one of the things I use it for. It gets in the way and tries to steer me in the wrong direction sometimes, but overall it is useful in non-sycophantic interactions (e.g., agentic tool use). The honeymoon phase of conversational AI has been over for me for a while. Hopefully I keep an immunity to bullshit like YouTube, social media and AI (yet to be seen and I’m sure you’ll set me straight :) ) and whatever comes next, and I’ll try not to demonize the new thing either.

Signed, A brain rotted individual

I agree, and actually I noted the similar effects of social media on people’s minds in general.

Right like at the most fundamental level, the main issue lies not in the inherent nature of a tool but in how it’s applied. Just as you noted with video content, you can either rot your mind with shorts or prune your algorithm to do amazing things for you like help you learn an instrument.

When you view tools merely as an input/output system like this, the nature of the tool itself is not relevant. There would be utterly no difference in this case between performing a standard web query, or having an LLM collate links for you.

Given this, the question then becomes, “well, is it actually possible for AI to be used in an equivalently responsible manner?”

My contention is that it is not, and the people using it for these purposes (including yourself) are incorrect about the nature of the output they think they are achieving. For example, it’s been established that AI use worsens worker productivity in general. Their numbers literally get worse, and we can also see the truth of these studies manifest in the sweeping failure of every company everywhere to realize any financial benefit from the adoption of AI tools.

The crazy thing is that these very same people will often incorrectly report that their productivity has in fact improved. Really think about that for a minute. Their numbers are worse as a matter of material fact, but they believe they are working more efficiently than ever before, sometimes by ridiculous margins of 50% or more.

With that in mind, now consider what may be happening to you if you rely on AI for immeasurable things: if you rely on it to organize information, for example, with the goal of becoming well informed and making good decisions. You claim to know when it’s leading you astray, and can course correct, but are you sure that merely being sure about that is sufficient to protect you? (Since fallacies are on the mind today, check out the toupee fallacy.)

To me, demonization has nothing to do with it. When a new drug comes to market I am skeptical. As information comes to light I accept or reject it based on that information. This process is what helps us differentiate between beneficial new drugs (like Ozempic is turning out to be) or complete scams like a recent workout pill that shall remain unnamed that, despite heavy marketing, ultimately does nothing besides causing liver failure down the road. Of note: despite being proven to do nothing, there are countless anecdotes of people trying it who reported amazing additional gains in the gym.

Just be careful out there, is all I’m saying. To be honest you don’t appear particularly brain rotted to me at the moment. Hopefully this admission absolves me somewhat of the aura of self fulfilling prophecy in that regard. My hostility in general is not directed at a particular “flag” (such as AI use, political affiliation, consumer habits, and so on) but at dishonesty and the absence of integrity when discussing them. If we sacrifice these things, we have no protection whatsoever from those who seek to scam us, as they can trivially exploit us using whatever ground we conceded

Whether it is a net negative or a force multiplier, it is certainly making work a bit more fun for me, so I’ll take my better attitude and greater engagement on my part over the multiple burnouts I’ve had throughout my career. Relying on it is probably not possible in its current capacity, as it’s still a fancy bullshit generator, so it’s hard to rely too much on something that doesn’t work. It’s like saying don’t overly rely on your work laptop: well, without some access to internal systems and records, I wouldn’t be very useful at my job. I see AI as eventually filling a niche role, probably. I guess time will tell.

In a way aren’t you asking “how can I be an AI vegan, without sounding like an AI vegan”?

It’s OK to be an AI vegan if that’s what you want. :)

The fuck is an AI vegan? There isn’t meat and AI isn’t food.

Your bed isn’t really made for a king or queen.

Oh great the bots are hallucinating.

They’re saying you’re taking things too literally and not thinking about the potential meaning of the sentence.

There is a belief that a lot of Vegans basically preach to others and look down on people who still consume meat. Their use of AI Vegan was meant to utilize that background and apply it to AI, so they don’t want to come off as someone preaching or being a snob about their issues with AI.

The fuck it's not.

I get the impression his bed was made for twins.

Ah ok. You might be new to language? There’s this thing called analogy.

Definition of ANALOGY

Oh hey, language is supposed to make ideas easier to transmit. The term is fucking clunky; using AI is not akin to a diet.

Communicate clearer.

OP came up with the analogy. I understood it quite well and caught on easily. Well done OP!

It seems to mean people who neither consume AI content nor use AI tools.

My hypothesis is that it’s a term coined by pro-AI people to make AI-skeptics sound bad. Vegans are one of the most hated groups of people, so associating people who don’t use AI with them is a huge win for pro-AI forces.

Side note: do-gooder derogation ( en.wikipedia.org/wiki/Do-gooder_derogation ) is one of the saddest moves you can pull. If you find yourself lashing out at someone because they’re doing something good (eg: biking instead of driving, abstaining from meat) please reevaluate. Sit with your feelings if you have to.


You say “pro-AI” like there’s a group of random people needing to convince others to use the tools.

The general public tried them, and they’re using them pretty frequently now. Nobody is forcing people to use ChatGPT to figure out their Christmas shopping, but something like 40% of people have already or are planning on using it for that purpose this year. That’s from a recent poll by Leger.

If they weren’t at the very least perceived as adding value, people wouldn’t be using them.

I can say with 100% certainty that there are things I have used AI for that have saved me time and money.

The Anti-AI crowd may as well be the same people that were Anti-Internet 25 years ago.

Of course people are using AI. It’s the default behavior of Google, the most popular web search. It confidently spits out falsehoods. This is not an improvement.

And there are definitely people “needing to convince others to use the tools.”. Microsoft and Google et al are made of people. They’re running ads to get people to adopt it.

Buying stuff online and email are useful in ways LLMs can only dream of. It is a technology nowhere near as good as its hype.

Furthermore, “the general public likes it” is a dubious metric for quality. People like all sorts of garbage. Heroin has its fans; I’m sure it’d have even more if it were free and heavily advertised. Is that enough to prove it’s good? No. Other factors, such as harm and accuracy, matter too.

It’s called a euphemism. We all know that a vegan is someone who does not use animal products (e.g. meat, eggs, dairy, leather, etc). By using AI in front of the term vegan, OP intimates that they do not use AI products.

I suspect you’re smart enough to know this, but for some reason you’re being willfully obtuse.

~Then again, maybe not. 🤷‍♂️~

Baseless slur made up by corporate-pushed mainstream media to normalize giving time and money to the AI companies that paid for their airtime

Stop trying to make AI Vegan work. It’s never going to stick. AFAIK this term is less than a week old, and smugly expecting everyone to have already assimilated it is bad enough, but it’s a shit descriptor that trades in right-leaning hatred of the ‘woke’, with vegans as just a scapegoat to you.

Explain how AI haters or doubters cross over with Veganism at all as a comparison?

Like veganism, abstaining from AI is arguably better for the environment.

That’s not just true of those two things though. I’m looking for a tie that binds them together while excluding other terms. If it’s an analogy, what is the analogy?

Explain how AI haters or doubters cross over with Veganism at all as a comparison?

They’re both taking a moral stance regarding their consumption despite large swathes of society considering these choices to be morally neutral or even good. I’ve been vegan for almost a decade and dislike AI, and while I don’t think being anti-AI is quite as ostracizing as being vegan, the comparison definitely seems reasonable to me. The behaviour of rabid meat eaters and fervent AI supporters is also quite similar.

But there are other arguments against AI besides consumption of resources. The front-facing LLMs are just the pitch. The police state is becoming more oppressive using AI tracking and identification. The military using AI to remote-control drones and weapon systems is downright dystopian. It feels like they’re trying to flatten the arguments against AI into only an environmental issue, making it easier to dismiss, especially among the population that doesn’t give a shit about the environment.
The way the term is being used here, though, is to refer to vegans as preachy and annoying; it’s not a pro-vegan term. It’s just not a nice term to use, as it ostracizes and belittles people fighting for rights.

This is the first time I've encountered the term and I understood it immediately.

Congratulations? Does that make it universal? Dude was being a prick when someone didn’t know what it meant.

For me this was the first time hearing it, and it immediately made perfect sense what OP meant. A pretty good analogy!

What are some good reasons why AI is bad?

There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

1. Bias and unfair decisions

AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

2. Lack of transparency

Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

3. Privacy risks

AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

4. Job displacement

Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

5. Misinformation and deepfakes

AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

6. Weaponization

AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

7. Overreliance and loss of human skills

As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

8. Concentration of power

Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

9. Alignment and control risks

Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

10. Environmental impact

Training large AI models consumes significant energy and resources, contributing to carbon emissions.

If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

Were you looking for this kind of reply? If you can’t express why you have an opinion, maybe your opinion is not well founded in the first place. (Not saying it’s wrong, just that it might not be justified/objective.)

Please, for the love of god, tell me you didn’t write that post with AI, because it really looks like that was written with AI.

Except for the first phrase and the last paragraph, it was AI. Honestly, it feels like OP is taunting us with such a vague question. We don’t even know why they dislike AI.

I’m not an AI lover. It has its place and it’s a genuine step forward. Less than what most proponents think it’s worth, more than what detractors do.

I only use it myself for documentation on the framework I program in, and it’s reasonably good for that, letting me extract more info quicker than reading through it. Otherwise haven’t used it much.

My question was genuine. I wasn’t an avid user of generative AI when it was first released, and lately I decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts as a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

All that being said, it is not easy for me to communicate these points clearly to someone the way I have experienced them. It’s like the case for informing people about privacy: casual users aren’t inherently aware of the consequences of using this tool and consider it a godsend. It will be difficult to convince them that the tool they cherish so much is not that great after all, thus I am asking here what the best approach should be.

I wasn’t an avid user of generative AI when it was first released, and lately I decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts as a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

Isn’t that exactly the answer you are looking for?

The "environmental destruction" angle is likely to cause trouble because it's objectively debatable, and often presented in overblown or deceptive ways.
“Good catch! I did make that up. I haven’t been able to parse your framework documentation yet”
You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the emdashes with hyphens.

I haven't tested it, but I saw an article a little while back that you can add "don't use emdashes" to ChatGPT's custom instructions and it'll leave them out from the beginning.

It's kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it's an easy fix.
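For what it's worth, that manual find-and-replace step is trivial to script; a minimal Python sketch (the function name and sample string are purely illustrative):

```python
# Illustrative sketch: swap em-dashes (U+2014) in generated text
# for plain hyphens, as described in the comment above.
def replace_emdashes(text: str) -> str:
    return text.replace("\u2014", " - ")

sample = "It\u2014allegedly\u2014writes like a person."
print(replace_emdashes(sample))  # It - allegedly - writes like a person.
```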

The most reasonable explanation I’ve heard/read is that generative AI is based on stealing content from human creators. Just don’t use the word “slop” and you’ll be good.

Except that is also a subjective and emotionally charged argument.

What is your viewpoint?

Mine, for example, is that not only do I not need it at all, it doesn’t offer anything of value to me, so I can’t think of any use for it.