Sending someone LLM output in response to a question they ask is the intellectual equivalent of sending an unsolicited dick pic.

https://lemmy.world/post/42725128

I recently read something in an article that struck me as the heart of it, and it fits.

“Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read.” - Dan Brooks

The Problem With Using AI in Your Personal Life - Lemmy Today

My friend recently attended a funeral, and midway through the eulogy, he became convinced that it had been written by AI. There was the telltale proliferation of abstract nouns, a surfeit of assertions that the deceased was “not just X—he was Y” coupled with a lack of concrete anecdotes, and more appearances of the word collaborate than you would expect from a rec-league hockey teammate. It was both too good, in terms of being grammatically correct, and not good enough, in terms of being particular. My friend had no definitive proof that he was listening to AI, but his position—and I agree with him—is that when you know, you know. His sense was that he had just heard a computer save a man from thinking about his dead friend.

More and more, large language models are relieving people of the burden of reading and writing, in school and at work but also in group chats and email exchanges with friends. In many areas, guidelines are emerging: Schools are making policies on AI use by students, and courts are trying to settle the law about AI and intellectual property. In friendship and other interpersonal uses, however, AI is still the Wild West. We have tacit rules about which movies you wait to see with your roommate and who gets invited to the lake house, but we have yet to settle anything comparable regarding, for example, whether you should use ChatGPT to reply to somebody’s Christmas letter. That seems like an oversight.

For the purposes of this discussion, I will define friendship adverbially, to mean any friendly communication—with boon companions but also family members, neighbors, and acquaintances—as well as those transactional relationships that call for an element of friendliness, such as with teachers and babysitters. There is reason to believe that use of AI in these friend-like relationships has already become widespread.
In a Brookings Institution survey released [https://www.brookings.edu/articles/how-are-americans-using-ai-evidence-from-a-nationwide-survey/] in November, 57 percent of respondents said they used generative AI for personal purposes; 15 to 20 percent used it for “social media or communication.”

Respondents to the Brookings survey were not asked whether they had offered some disclaimer about their use of AI or were passing off its outputs as their own; few statistics seem to exist on that question. But in a 2024 survey [https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part] released by Microsoft, 52 percent of respondents who used AI at work said they were reluctant to admit using it for “important tasks,” presumably because it might make them look replaceable. My feeling is that using AI for friendly communications operates on a similar principle—but the share of people who should be ashamed is closer to 100 percent.

Deception is only part of the problem; the main evil is efficiency. The people selling AI keep suggesting I use it to streamline tasks that I regard as fun and even meaningful. Apple’s iOS 26, for example, has made text messages more efficient by offering AI summaries of their contents in notifications and lists.
Before I turned it off [https://support.apple.com/guide/iphone/use-apple-intelligence-in-messages-iph64709c5c3/ios#%3A%7E%3Atext=Turn+message+summaries+on+or%2CSummarize+Messages+on+or+off.], this feature summarized a group chat—in which my friend sent a picture of the door to her spooky attic, normally locked but now ajar, that became the occasion for various jokes about her finally being haunted—as “a conversation about a wooden room.” In addition to being inaccurate, this summary removed everything entertaining about the chat in order to reduce it to a bare exchange of information. Presumably the summary would have been more actionable if the conversation it summarized had focused on dates and times or specific work products instead of jokes, which are notoriously hard for AI to parse.

But how many conversations with friends are about communicating facts? When my brother texts “How’s it going?,” he’s not seeking information so much as connection. That connection is thwarted if I ask ChatGPT to draft a 50-word reply about how his baby is cute and I love him. To prevent hard-core get-it-done types from inflicting slop on the rest of us, we need to agree that my sending you material written by ChatGPT is insulting, the same way you would be insulted if I were to play a recording of myself saying “Oh, that’s interesting” every time you spoke.

The assumption that the main purpose of writing is to convey information quickly breaks down when you consider cases beyond signage and certain airport-oriented areas of publishing. In schoolwork for teachers, chats with friends, or even emails to business associates—relationships that are defined by mutual obligations—a primary function of any written text is, to borrow a phrase from cryptocurrency, proof of work. This work is the means by which the text was produced but also an end in itself, either because it benefits the writer or because it demonstrates commitment to the reader.
Generative AI sabotages the proof-of-work function by introducing a category of texts that take more effort to read than they did to write. This dynamic creates an imbalance that’s common to bad etiquette: It asks other people to work harder so one person can work—or think, or care—less. My friend who tutors high-school students sends weekly progress updates to their parents; one parent replied with a 3,000-word email that included section headings, bolded his son’s name each time it appeared, and otherwise bore the hallmarks of ChatGPT. It almost certainly took seconds to generate but minutes to read.

As breaches of etiquette go, where this asymmetric email falls is hard to say; I would put it somewhere between telling a pointless story about your childhood and using your phone’s speaker on an airplane. The message it sent, though, was clear: My friend’s client wanted the relational benefits of a substantial reply but didn’t care enough to write one himself.

Writing is an act of taking care. College students write term papers not to inform their professors of the role of class in Wuthering Heights, but because putting what they have learned into words clarifies their understanding to both their instructors and themselves. Writing a eulogy both leads the eulogizer to think deeply about his relationship with the deceased and demonstrates his ongoing commitment to that relationship, even and especially after he can derive no benefit from it: Our goalie is dead, but we care enough to keep thinking about him even after he will stop no earthly puck.

A time-saving technology such as AI is appealing in the workplace because many people want to spend less time working.
This calculus should not apply to our friendly relationships, which are not purely means to money or status but also ends in themselves—experiences of other people that are worthwhile as experiences and therefore diminished by efficiency. I don’t want these relations to become more efficient for the same reason I don’t want a robot that pets the dog for me. And if you don’t want to text me, then why do you want to be my friend?

Sometimes, of course, friendship is a pain. It would be easier to conduct friendship purely on our own terms, responding when we felt the urge and letting a computer talk to our friends when we didn’t want to. But that would not be friendship. A computer takes no care. We should not let it take the experience of caring away from us.

— From The Atlantic [https://www.theatlantic.com/] via this RSS feed [https://www.theatlantic.com/feed/all/]

That’s something I’ve attempted to say more than once but never formulated this well.

Every time I search for something tech-related, I have to spend a considerable amount of energy just trying to figure out whether I’m looking at a well-written technical document or crap resembling it. It’s especially hard when I’m very new to the topic.

Paradoxically, AI slop has actually made me read the official documentation much more, as that’s now easier than doing this AI-checking. And also personal blogs, where it’s usually clearly visible that they’re someone’s beloved little digital garden.

Funny how people whose job it is to write can sometimes write gooder than us common folk.
funny for the writer elite maybe >:(

That’s something I’ve attempted to say more than once but never formulated this well.

Did you try ChatGPT?

Damn. Nailed it.
I had this “shower” thought when chatting with a friend and getting an obviously LLM-generated answer to a grammar question I had (needless to say, the LLM answer misunderstood the nuance of my question just as much as the friend did before). Thank you for linking the article; I will share it with my friend to explain my strong reaction (“please never ever do that again”)
AI, and someone who uses AI, missed nuance? This is my surprised face. (- _ -)

Question: why does the linked lemmy.today “[email protected]” show up here on lemmy.world (lemmy.world/c/[email protected]), but there are zero posts visible in the community? I mean - since you commented from lemmy.today, we are clearly federated? I am confused - I wanted to comment on the article you linked with a question, but I can’t find it via lemmy.world :(

Edit: Mhh… it seems I could send a federation request specifically for that community. I have done that, I hope someone will respond to it.

The Atlantic - Lemmy.World

Since 1857, The Atlantic has been challenging assumptions and pursuing truth. Don’t post archive.is [http://archive.is] links or full text of articles; you will receive a temp ban.

Federation sometimes has a few quirks. Seems like you figured it out though.
Yeah, it’s working now :) This was the first time I experienced having to subscribe to be able to see posts from a community. Still weird, but if I assume correctly that this works like the Usenet, if I unsubscribe again, now that the community is federated properly, the posts should remain visible to everyone @lemmy.world?
That’s my understanding but I’ve not played with it too much
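For what it’s worth, the “federation request” mentioned above can also be triggered programmatically. Here is a minimal sketch, assuming Lemmy’s v3 HTTP API and its `resolve_object` endpoint (the community name below is hypothetical, since the actual one is redacted in the thread); it only builds the lookup URL a client would fetch:

```python
from urllib.parse import urlencode

def resolve_url(instance: str, community: str) -> str:
    """Build the URL a Lemmy client could use to ask `instance` to
    federate-fetch a remote community (e.g. "community@lemmy.today").
    The "!" prefix is Lemmy's notation for a community identifier."""
    query = urlencode({"q": f"!{community}"})
    return f"https://{instance}/api/v3/resolve_object?{query}"

# Ask lemmy.world to look up a community hosted on lemmy.today:
print(resolve_url("lemmy.world", "community@lemmy.today"))
```

Fetching that URL (typically while logged in) should prompt the instance to pull the remote community over ActivityPub, after which its posts start showing up, as happened here.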
Let me go ask AI and copy the response below for you.

The most annoying part: the recipient’s email client probably offered to summarise it with an LLM. My bot makes slop for your bot to interpret.

It’s the most inefficient form of communication ever devised. Please decompress my prompt 1000x so the recipient can compress it back to my prompt.

I will say though, even a chatgpt email tells you a lot about the sender.

The question I ask is “How do you justify saving your own time at the expense of others’ time?”

Haven’t heard a good answer, just mumbling “it can be set to be less verbose…”

Thank you for this great answer! It’s something I intuitively felt but couldn’t put my finger on with the same surgical precision you just did.

Receiving LLM output as an answer to a question is the equivalent of getting a voice reply to the question:

“Quick question, are you free on Saturday afternoon?”

Downloading audio message… Duration: 45 seconds
I absolutely cannot stand the kind of people who answer a brief and simple yes-or-no question with a wall of text or a two-minute voice note. If it’s that complicated, because your pet chihuahua just had a stroke, you then fell head over heels in love with the veterinarian, and you’re currently at the airport to fly away on your spontaneous honeymoon, just say no and tell me about the details in person.

If I got that question by text though I'd normally ignore it until Monday.
Bumming around doing nothing is one of my most valued hobbies.
"are you bored ?" might get a response, but better to reveal something about the proposed alternative. "want to do macrame on Sunday?"

I especially hate this one at work.
"you free?"
Unfortunately, a polite reply is expected in that context, so I can't say "no" (I'm at work, as you fucking well know).

The question normally means "I fucked up X and don't know what to do about it."

If they don't tell me what "X" is, how do I know where their fuckup ranks in the wider population of fuckery?

I respond to the “are you free?” with “No, I charge $200/hr.”
Somehow, people don’t get that if we ask them something, it’s because we want their personal interpretation of it; otherwise, we would just use the internet ourselves.
Specifically this: in terms of learning a language, understanding some nuances absolutely requires an explanation by a native speaker who has a really good grasp of their language AND a talent for explaining, both of which are criteria diametrically opposed to the average slop training data.

Especially if you don’t even specify it’s AI. Like, I don’t mind using it, but be upfront that you don’t know and consulted an AI.

Like, I see it happening at my work: people just straight copy-pasting from Copilot or w/e, and it’s clear to me that’s what it is (especially if it’s discussing things I know that person has never heard of before lol)

I am slowly switching to increasingly less diplomatic reactions when I feel someone is using slop to respond to me or produce any kind of work text. Eventually I’ll probably advance to offensive reactions à la “Are you so f*cking incompetent that you can’t do better than copy-pasting into a glorified word prediction software?”
I definitely use it at work to “corporate” my emails or descriptions for things because my way of speaking would be frowned upon lmao. Literally “corpo this sentence please” or something along those lines.
Well, it’s common courtesy that if someone asks you something, you assume they already asked Google or whatever and think you might have the answer they can’t find.

That, and for some questions (i.e. nuances), a personal opinion is much more relevant to the asker than some random slop explanation. In this case I wanted to know which word construct in Turkish comes closest to the English “[ so and so ] is [ whatever ], isn’t it?” vs. “[ so and so ] is not [ whatever ], is it?”, because Turkish has “isn’t it?” (değil mi? = not so?) but it doesn’t have “is it?”, mostly because “to be” works very differently in that language.

A Google result wouldn’t help me at all; the pure grammar answer is “there’s no form of ‘is it’ to be coupled with a negative assumption/assertion”. But does a language construct exist to convey the nuance of “the speaker assumes that something is NOT [soandso], and wants to ask for confirmation” vs. “the speaker assumes that something IS [soandso], and asks for confirmation”?

I still don’t know the answer, but it appears this nuance can’t be expressed in Turkish without describing around it in a longer sentence.

But I have my phone’s texting set permanently to respond with AI so I never have to talk to anyone.
I mean, I don’t care if they use it like a search engine to remind themselves about a topic, as long as they had some knowledge of it before they looked it up, put some cognitive power into going over the answer and absorbing it, and respond in their own words. But yeah, a cut-and-paste, or when they know nothing about it and parrot off what the LLM tells them? That’s annoying.
While it doesn’t affect me directly if people use it “like a search engine”, it still empowers the tech-bro billionaires, who are the worst of the worst of the scum of mankind, and it fucks up democracy, the environment, and hardware prices. So I’d rather everyone just boycotted this BS.
Doesn’t using a search engine do the same? Empower the tech bros? Do you expect people not to use search engines? Because, man, that is just not going to happen.
Not in the same way. People are more cautious (on average) with what info they give away there, plus pre-LLM search engines were unable to contextualize a user’s search history. Now though - yes people should boycott the big engines. Becomes easier, too, with AI slop rendering them near useless.
I mean on one hand, it’s a shower thought. On the other, this is a really dumb shower thought.
I needed that reminder. It doesn't matter how stupid a showerthought is.
I often use AI to break up my ADHD mono-sentence paragraphs. I’ll stream-of-consciousness my reply, then tell it not to change my wording but to break up the excessively long sentences and to reorder and split things into paragraphs that flow well. I’m still doing the writing, but having an advanced spell check is actually super useful.
Pretty sure my boss did this to me today.
At least a dick can be useful to create life… an LLM can never become life
I think I’d prefer an unsolicited dick pic.

I don’t quite get the equivalence there. I’d say an LLM response is more on par with responding with a link to lmgtfy.com or something.

The intellectual equivalent of sending someone a dick pic would be a cold contact with LLM-generated text promoting or pushing something that you didn’t otherwise show interest in. Or like that friend from high school who messages you out of the blue, and you realize after a few messages that they’re trying to sell you their MLM garbage.

I don’t quite get the equivalence there.

It’s garbage insulting your intellect and personal relationship with the sender. Whereas an unsolicited dick pic is garbage insulting your eyes and personal relationship with the sender.

They’re both garbage, sure, but I wouldn’t call it an equivalent. Especially in severity–one is insulting, the other is sexual harassment.

The key word is “unsolicited.” An LLM response to a question you ask is garbage, but it’s solicited garbage. Like asking someone in Home Depot where the hammers are, and having them take 10 minutes to look it up on their phone. It’s a stupid response, but it was solicited. It’s at least a lazy attempt to respond relevantly, however insulting.

Or just sending the link to chatgpt.

“Don’t ask me, just ask chatgpt! What am I, your boss or something?!”

my boss does this all the time. I just ignore it.
Sending SOMEONE ELSE’S dick pic at that.
there’s that, too…
Sending a shitty AI representation of a dick pic.
If I wanted to ask chatgpt I would have asked it myself 
No love for LLMs from me but, flatly, no. Asking a question is soliciting a response. Their response is not the one you wanted, but it is solicited. It would be like you asking for a dick pic from someone, the penis of whom you were interested in seeing, and them responding with a generated image from one of the unfiltered image generators.
The intellectual equivalent to an unsolicited dick pic is probably spam advertising. A piece of media is being sent to someone who did not request it, by someone who does not care if the recipient does not want to receive it.
Totally agree. It’s nowhere near the level of a dick pic - a dick pic is sexual harassment.
We’ve gone into this in detail in the other threads. If you send someone LLM output, you’re a shitty friend/colleague/whatever.
And yet still in no way equivalent to a dick pic. The equivalence here is “raspberriesareyummy doesn’t like that”, which doesn’t exactly pass muster, even for a shower thought.
Reply: tell ChatGPT I said thanks.
Read AI output, check the sources to confirm it’s true. Reply in your own words.
That’s the polite variant, but it still involves the use of LLM, and the assumption that machine learning is AI (it’s not, despite what the tech bros tell you). People using LLMs should be treated like people who pick their nose and eat their boogers at the dinner table. :p
Where’s Draconic_MEO when you need them?
It might mean you’ve asked a trivial/routine question you easily could have answered yourself. In the same way someone might just send you a Google response prior to chatgpt.