My friend seems genuinely baffled that I am an AI researcher who refuses to use AI! Not only that, but I argue against it from theory, not experience. Why don't I just give it a try for a while, and see what it's really about before I judge it?

I guess I see where he's coming from. Part of the problem is the word "AI." LLMs are not my research focus, so it's less of a contradiction than it sounds. But I admit, being a non-user makes my arguments against LLMs less credible.

I just don't understand why I owe it to anybody to give AI a shot. I know how LLMs work in gory detail, and I don't trust them. I've seen the mediocre work they produce. I've read studies about the seductive illusion of competence and caring they create, and how people fall for that. I know it's all built on an incredibly exploitative business model.

I feel entirely justified in not giving them a chance. I guess I'm just as baffled by how badly he wants me to try it, and how sure he seems to be that it would change my mind.

@ngaylinn

It will only make him more annoyed if, after you do (not saying you should), you are still not impressed.

I have tried it.

In good faith too. Let me see if I can make this work. Can it save me any time?

But the time savings are an illusion. I tried it for feedback on a poem, and it was very flattering, but on reflection that felt hollow, and I didn't want to trust the advice since... well, I didn't write the poem for machine validation. I write poems to impress my friends.

@futurebird Indeed. I'm also confused that he feels it's been such an improvement in his life! In our exchange, he let an LLM edit one of his replies. Not only did I spot it immediately, but it was noticeably worse than the rest of our dialog. "Hollow" is a good word for it.

There's no convincing him. And I wasn't trying to convince him, which is the weirdest part. He just got defensive and started trying to convince me...

@ngaylinn

People are scared that others look down on them for leaning on this tech. He may have been using it much more, in ways you don't know about.

I think the impulse to hide the use kind of says everything. The shame isn't coming from other people; it's one's own self-respect saying that this is lazy, inconsiderate, hollow.

AI use guilt is real. And justified.

@ngaylinn

I know of someone who asks AI things like "my co-worker asked me to cover for them, how do I say no without making them mad?"

It might be OK advice, but by using it one is avoiding the friction and danger of human interactions.

I get it. Social interactions are terrifying. But, if I use a machine to help me never make a "mistake" will anyone even know who I am anymore? I am an annoying person. That would be lost.

@ngaylinn

I guess that's why doing that for an email to a boss seems less offensive than doing it for an email to a friend.

That said, I don't even really think the tech is very good at this type of thing. You will always need to proof-read and edit, and edit and edit.

By the time you are done it's the same process.

Unless you close your eyes and hit "send" and hope no one notices you sound like a creepy overly friendly creature from an advertisement. And some people are doing this.

@futurebird
My wife is much more productive if she starts with something on the page and can edit it. Almost all her scientific papers started out with me writing some bullshit about the topic and her then writing an entire paper about how wrong I was 😅 Anyway, now she outsources that to AI! AI took my job!
@ngaylinn

@dlakelan @ngaylinn

OMG I should ask my husband to write a paragraph about ants. It would cause a whole book.

@futurebird @ngaylinn

I'm here for the book. Kickstarter that puppy.

@dlakelan @futurebird @ngaylinn We don’t need AI, we need lorem ipsum generators.

In other news, I don’t need money, I just need money.

@dlakelan @futurebird @ngaylinn I love this technique and am totally up for it. (I cannot write when I'm faced with a blank screen with a blinking cursor)
@dlakelan @futurebird @ngaylinn You know, this is a great point though. I think LLMs can be useful for some tasks, one of them being just getting a kind of template to start from and do your own thing. I think some of the backlash against these models is absolutely justified, but we miss nuance here. There are many LLMs to pick from, and much of how they sound comes down to what training data they're based on. I've experimented with many of the models out there, and I think some of the most interesting things you can get are actually from local models that you train on your own. I see them as more of a fun toy than anything serious right now, and I don't think it's going to be the AI revolution many are hoping, but you never know. I wonder what's going to happen in the next 5 to 10 years.

@futurebird @ngaylinn

"That said, I don't even really think the tech is very good at this type of thing. You will always need to proof-read and edit, and edit and edit.

By the time you are done it's the same process."

Yep. A colleague of mine used an LLM to shorten an email I was going to send. I had to check it carefully and correct it in three places where it had missed my nuance and actually changed the meaning.

@futurebird @ngaylinn if you have, like me, a typing speed close to talking speed, proofreading an email costs as much time as writing it myself in the first place. So prompting an LLM to write it for me is all additional time lost.
No thanks, I will write my own mails.

@futurebird
> I am an annoying person. That would be lost.

🤣
@ngaylinn

@futurebird @ngaylinn

There's also a goodly chunk of evidence that LLMs pick up racist & other sick biases from the 'swamp' that forms their learning pool.

Why does your friend not feel wary about that 💩 tainting their work & interactions?

Asking honestly... I can't imagine why anyone committed to a better world thinks AI will help, given the data I've seen

@PeachMcD @futurebird @ngaylinn A Veritasium Youtube video about a month ago showed how modern life, in increasing the number of "distant friends" or strangers any of us regularly encounter (via travel, communications, social media, etc), has undermined effectiveness of previously dominant positive, altruistic, or "tit-for-tat" social strategies which rely on repeated encounters. Hence folks lean more towards selfish interaction, or simply don't care what others think, because it works for them.
Something Strange Happens When You Trace How Connected We Are

@futurebird @ngaylinn If you let AI write it you wouldn't get the catharsis from writing your own cranky email to someone about a problem. I'm going to have to edit it for professionalism anyway, so I might as well type it all out and get to vent in the first draft.
@futurebird @ngaylinn I mean, you might be less annoying. But I've a friend, I love her dearly, but she does tend toward formal texts. I think it's from having worked in a call centre, and I'm like, I'm a friend, not a customer; you don't need to talk to me like that. So it might make you more annoying.

@futurebird

@ngaylinn

I wish there was more gigantic-personal-vehicle use guilt.

Although a neighbour bought one, then sold his wife's small car and bought her an EV, so that's something, except she wasn't consulted.

@EricLawton @futurebird @ngaylinn Traded in one source of guilt for another. Now is ze time on Sprockets vhen ve learn to treat women with basic respect.

@ngaylinn @futurebird

The interesting and terrifying psychology here is that “use guilt” is followed by a campaign to get everyone else on the bandwagon instead of using better tools. This is a significant factor in why we have such a ridiculous and irrational collective action problem, as the last decades amply demonstrate in so many areas. Never mind teaching children to “use tech” and “prompt AI”… how about teaching people some tools for self-reflection, emotional maturity, and personal growth?

@ngaylinn Peer pressure comes from fear, aren't the hypers trying to convince themselves as much as anyone else?

@futurebird @ngaylinn indeed.

I was told: "You aren't doing it right."
And: "It has gotten a lot better..."

OK perfect!
Please link me to the SOP or "any documentation™" for this to use in my business setting context that has established data for use cases adjacent to our current model of doing things.

.....still waiting.

@thejikz @futurebird @ngaylinn I will make it easier! Send me instructions on how to configure it to be secure and avoid things like poorly managed credentials! Because as the person who has to fix how we manage credentials, I HAVE TO OPEN SAID CODE.
@thejikz I've yet to see One Singular Example of an AI success story. Lots of just-around-the-corner but never this-actually-went-well.
@futurebird @ngaylinn the hollow flattering has made me habitually deliberately torvalds my way through every human interaction even harder than i used to

@ngaylinn I guess one way to get the point across is to compare it to pharmaceutical drugs: just because one might study opioids for painkiller purposes doesn't mean that one feels the need to inject heroin directly into one's veins. Even if one might take Tylenol to deal with a migraine, there is a line one wouldn't cross.

And a pharmacist would be fully justified in refusing to take heroin, no matter how much someone else talks about how good it feels to use.

That *might* backfire, because LLMs are not necessarily heroin, but the general idea is: "I have heard heroin is bad. I do not necessarily need to do my own research using it to believe that to be true."

@AT1ST @ngaylinn The difference between opiates and LLMs, is that there are use cases that fully justify the hazards of opiates. Best case, LLM use will only produce fluff in exchange for converting a lot of energy into heat.

A better analogy is studying the effects of mercury while refusing to drink down that vial of methyl mercury.

@su_liam @ngaylinn The main thing I was thinking of is that there is a distinction between opioids; even if you *might* use a weaker opioid for a test, you would avoid fentanyl... unless that level of pain relief is necessary to justify it.

For a while, doctors did overprescribe opioids, but have since tried to curtail it, because of the opioid epidemics; but often, someone talks about a new more powerful opioid. You do not need to *try* them to know that a more powerful opioid can be a problem.

@ngaylinn THIS!
THEM: “Such & such is the best mallet hammer ever!”
ME: “happy for you on your behalf but I don’t need nor have use for one.”
THEM: “But, you have to try it. It will change your life!”
ME: “happy for you on your behalf but I don’t need nor have use for one.”
THEM: “But, you have to try it. You will get left behind.” (FOMO move in action)
ME: “happy for you on your behalf but I don’t need nor have use for one.”
THEM: “But, you have to try it.”

No why, reason or benefit given.

@dahukanna @ngaylinn
Back in the day the phrase was "misery loves company".

@ngaylinn
I kind of worry that I will get sucked into it if I am forced to use it for work etc.

Maybe that's not justified, but seeing how badly people want me to "just try it" makes it sound like an addiction I want to steer clear of.

@lilacperegrine Yeah. I dunno how realistic of a fear that is, which is kinda scary on its own! Hopefully knowing the risk and being a skeptic means you wouldn't, but some people certainly do...
@ngaylinn nobody is immune to propaganda
and llm stuff can be tailored to fit you so it’s worse
@lilacperegrine @ngaylinn that peer pressure 😓 let's remember no one is immune to propaganda. They are literally shoving AI down everyone's throat.
When someone asks why I don't use AI, I just tell them "I don't need it, I'm smart enough"
So far no one has had a response for that one 🤣
I'm average btw but if my parents made it without it, why can't I?
@Psyche30_ @lilacperegrine @ngaylinn It should be called artificial stupidity (AS). Then it could be upgraded to artificial subnormal stupidity (ASS).

@Biggreenjoe @Psyche30_ @ngaylinn

There was a shitpost going around calling it "Applied Statistics" for the same reason, but honestly "Applied Statistics" is a better name for what a lot of this is doing than a machine literally learning things.

@lilacperegrine it feels not so much like an addiction (like Tiktok would be) but rather an easy way out, and like my braincells will die if i start using AI
@bookstardust @lilacperegrine it's both... sadly.
The addiction is similar to the kind casinos engineer, and if you don't use something it deteriorates (like a muscle): making it do things for you becomes not using your brain.
@lilacperegrine @ngaylinn i was forced to use ai for my programming job , i also was afraid of this . truth is , it only made me hate it more , and want to stay even further from it in my free time
@fiore @lilacperegrine @ngaylinn forcing devs to use AI is crazy tbh like Yeah you must use the shitty crap generator instead of doing things the way you know to do or your fired
@kimapr @lilacperegrine @ngaylinn its more like : heres this churn generator . u dont have to use it !! but every week my churn expectations from you are gonna be very high , and my quality expectations very low . but dw really !! its totally fine if you dont wanna use it !! imean i am spending 200€/mo for it which i could rlly havr just given to you in your paycheck , but does that even matter . dw really
make a chatgpt clone that just gives you your own code 🧌
CC: @[email protected] @[email protected] @[email protected]
@[email protected] If you'll indulge me, how about a word replace: "My friend seems genuinely baffled that I research lotteries but refuse to play the lottery! Not only that, but I argue against it from theory, not experience. Why don't I just give it a try for a while, and see what I really win before I judge it?"
@abucci Good one. I've been struggling to find just such an example. But, yeah. How can you judge the poison without taking it for yourself?! Er... rather easily, actually.
@ngaylinn @abucci "How can you say vehicle collisions are bad if you've never crashed one? Bones heal back stronger."

@ngaylinn There's so many things that people never try because of understanding first principles, or just a basic acknowledgement of how obviously bad something is. None of these people have "given a chance" to cutting off their own arms, eating gravel, or trying to jump the Grand Canyon in a Dodge Caravan.

I've periodically tried my work's AI, and the only thing it's "good" for is making me so mad at its results that I pound out what I need myself in half the usual time. Every time one of my coworkers gives me AI generated code, the code has serious basic flaws (like, "that library you're relying on doesn't exist" or "the response to that function is completely unrelated to what you're using it for").

@ngaylinn "the best way to quit smoking is to never start smoking"

When I explain why I'm not a botlicker, I do it from the perspective of marketing: that shit was made to be addictive.
With neuromarketing, things are made to benefit the company, not the user. If the human has a reflex they can't avoid, then it's used against them for profit.
Ask someone to quit smoking; maybe they could. If it's possible, then everyone can argue it's the user's choice... right?

@Psyche30_ Absolutely! These things cost a fortune to train and run, but the companies just give the public free access. Why would they do that? Because it's a trap.

I don't care if the bait is tasty.

@Psyche30_ Oh hells, that's a take. And suddenly I see a very ugly side to the bubble popping.

A whole lot of people are going to abruptly lose access to their addiction bots.

@ngaylinn

@solitha @ngaylinn I've never once smoked in my life, and yet at times I've felt like I just needed to smoke, even as a teen... 🤣
I'm guessing maybe from watching other people smoke when they need relief.
And I mean, when you study marketing you learn how cigars became mainstream: it was a HUGE marketing campaign.
Future children may have to study the gaslighting / dopamine hit / dependence behind AI.
@ngaylinn I've gotten the same argument from gun nuts. "How can you say you don't like guns if you've never fired one?" Because if I fired one and liked it, that would make it WORSE. There's all kinds of bad stuff humans do that we like to do--because we're predators. We are animals and we are predators, and if we steal the labor of others or build the propaganda/war machine and enjoy doing it, our enjoyment doesn't make it the right thing to do. You don't have to try it to know that it's wrong
@smutmag I've fired guns, and liked it. I do not wish to do so again - no allure. Noisy fun yes, hunting tool yes. But I don't believe everybody needs one for defense, that's garbage.

@ngaylinn

People who are tech researchers (like my partner) are the ones most against the stupid use of tech. The more you know HOW it works, the less you want to apply it to your own life. We have dumb everything in this house.

We don't owe it to anyone to give the bullshit machines a try for ourselves. We know HOW they work. We don't need to see it work on us.

@chu @ngaylinn Always keep a gun next to your printer. This way, if it starts to make a weird noise, just shoot it. 

@chu @ngaylinn also applicable to Fox "News."

There are people who watch it that think they're smart enough to not fall for the brainwash effects.

@mwt @ngaylinn

Lol. Between us our PhDs are in computer science and communications.

I was just granted a degree this year for arguing that media killed us during COVID.

Definitely no RW news in this house.