RE: https://oldbytes.space/@gloriouscow/116224004520766154

There's a larger issue here, and that is that it's trendy in certain spaces to be extreme and opinionated about your beliefs, and angry at anyone who doesn't share them. I see this a lot on Bluesky and Mastodon.

The problem is, this is a slippery slope towards ending up in a tiny bubble and losing many of your friends. And that doesn't lead to happiness or to good mental health. Not for you, and not for the people around you.

The two biggest topics I see this with lately are AI and trans discourse. The simple fact is, morality isn't absolute. Words don't have absolute meanings. Tools aren't absolutely evil or absolutely moral.

It's okay to be sad at the state of the world. I'm sad too! And it's okay to be angry at problem people (think, the billionaire class). But when you direct that anger at your peers, just because they don't share the exact moral compass you have, you're just hurting them and hurting yourself.

It's impossible to live in a world where your social circle is fully aligned with you on beliefs and morals. It just isn't possible. It's okay to be disappointed. But if you start cutting people off for it, you aren't making anything better.

(cont'd)

In the extreme, you misfire, and misfire so badly you just end up isolating yourself. I've seen this happen multiple times recently, with people who saw what they perceived to be a moral slight or failing by someone they knew, and came out with knives swinging. Those never end well. At best you get blocked by one person. At worst you get called out for being completely unreasonable, and end up losing badly.

For better or worse, we need to work together. You can prioritize those closer to your ideals. It's okay to modulate your relationships based on how aligned you are on values. But that's the key word: modulation. If you care about someone and they make a moral choice that you dislike, the two healthy things to do are either to have a conversation about it (in private!), or to just do nothing, accept the disappointment, and move on.

At the end of the day, outrage and anger might get you clicks, boosts, and a feeling of satisfaction... but are you really helping? Are you really making your life better in the long term? Others'?

I'm going to use an older example on purpose. Some time back, a friend expressed that she wanted to play and stream the wizard game, and I DMed her. I tried to explain why that would be so hurtful. And I convinced her not to. And I think that was a lot more productive than piling onto people who play the wizard game on the internet.

It took 12 minutes from making that post to the first reply accusing me of victim blaming and gaslighting, and a further 5 minutes for that conversation, where I attempted to be pragmatic and polite, to escalate to ad hominem (at which point I blocked them).

Edit: And looking through the thread and replies, there were multiple errors of judgement made by this person, including:

  • Taking my position as a direct affront to them (when I have no idea who they are and obviously am not referencing any specific situation they are part of)
  • Making assumptions about my background
  • Interpreting my usage of the impersonal "you" as personal (despite it being in the context of a hypothetical, which I used since I was not even clear on this person's stance, as their replies were one-liners with no context).
  • Taking my suggestion to consider therapy (again, in an impersonal hypothetical context) as a direct accusation of being mentally unstable (almost everyone can benefit from therapy!)

And this is why coming out knives swinging is not going to make you any friends or convince anyone.

@lina It feels like some people take their purity tests way too seriously, IMO. I understand that, the way things are headed, strong pushback is needed, but attacking / cutting off people who you're >75% in agreement with is not it.
@ddg Yup. And it's actually entirely counterproductive, because this (which can be summarized as "leftist infighting") only serves to distract from the major problems we can all agree on. It makes us weaker, not stronger.

@lina Absolutely. It's not a new problem, either, but those people fail to learn from the left's past messaging / gatekeeping mistakes.

It's ridiculous how most of the things we defend are popular with the general population, yet that infighting prevents us from becoming strong enough to promote actual change.

@ddg @lina It's just so utterly stupid. I have been watching leftists burn bridges from their moral high ground for decades, yet the world is leaning more to the right than it ever has, so it's clearly not working.

Unless of course your goal isn't to make the world better / lean more towards the left and social values, but to be a "thought leader" in a tiny little petri dish no one else cares about.

@ddg @lina in fact i think it's valid/interesting/useful to be friends with people with whom you are in 10% agreement
@m @lina As long as you can handle the 90% you disagree with, absolutely! Echo chambers don't do us much good, indeed. But there are some disagreements that just aren't compatible with being friends with someone.
@lina I've been noticing more and more people thinking cynicism and a lack of charitability is a virtuous way to look at things. if you only expect the worst out of people, you'll only get the worst out of them

@lina not sure which part they saw as gaslighting  

At my day job I've seen a lot of what you describe. I'm one of the few that don't use AIs or have gemini/chatgpt open all day, and I'm the weird one...

I used to point out the problems with these kinds of tools but, with time, my coworkers stopped discussing these topics with me. I've learned now to pick my fights: I can't talk them out of using them entirely, but for some specific use cases I can manage to.

I've also seen that, if you aren't always the "bad guy" ruining the fun, you can learn a thing or two and people trust you more

@lina What need did you satisfy in posting this?
@lina Quite frankly, the "impersonal you" wasn't that clear in the exchange, so I can see how it could be read as targeted at the other person.

@patrick That's fair.

But that's kind of the issue with these situations: there's no room for error once the knives come out. The other person came at me directly without any nuance or room for alternate interpretation. I should've stopped and reworded, but I'm only human; I was already bothered by the opening salvo, and in my experience there's little room for a nuanced argument with people who act like that... so I think my brain just went into "get it over with" mode, and unsurprisingly that didn't work great.

@lina This brings to mind the so-called "4-Ears Model" of communication. If people get stuck in the so-called "Appeal Ear", then whatever message they receive, they interpret it as if they were being asked to do something, or to change something about what they are doing regarding the topic at hand. Which ear gets emphasized the most is typically the listener's responsibility, and not a result of the sender's exact words in the message:
https://greator.com/en/4-ears-model/

@lina Thank you for taking the time and making the effort to write this out and share it in this space. I absolutely agree with you.

From my own perspective, I think it is a constant battle to remind people that the "social" part of social media should encourage the use of social skills. The best parts of us can be leveraged here, even when the mechanisms of a social media platform allow and even reward hit-and-run tactics.

@lina idk this kinda comes across as gaslighting and victim blaming to me

@jackie Did you read the quoted post? The context is people using AI vs not using AI. If you feel so strongly about AI that you consider yourself a victim of everyone that uses AI (directly or indirectly), then... honestly, I don't know what to tell you, because even if you're justified in having that strong a reaction, at that point there is no healthy outcome for you in that state of mind. If that's genuinely where you're at, I think you might need a therapist.

Another case I saw recently is a trans person who believes 100% of cis people and 80% of trans people are transphobic. Similarly, regardless of how valid the underlying triggers may be... if that's where you're at, you need professional help to process your feelings. The world isn't going to change overnight to align better with you, and you are not going to be in a good place with that mindset.

@lina that's you lacking a systemic perspective on transphobia

@jackie I'm not the one getting banned from communities for lashing out at their peers. Again, you can argue about the situation as much as you want, but the practical reality is people with feelings so strong that they are perpetually angry at everyone around them are only hurting themselves.

You cannot exist in any community if you believe that, even within circles of your peers, 80% of them are toxic. It doesn't matter how right or wrong you are. You aren't going to convince everyone around you to change like that.

@lina you're a sicko
@lina instead of reconsidering at all, your response to the mere suggestion of gaslighting was to imply i am mentally unstable
@lina i'm assuming you have a computer science background to know about AI applications... but what education have you had about the sociology of race, class and gender? why would your perspective carry such weight over this hypothetical person, let alone me?

@lina Unlike with imperfections within trans spaces, where my experience has been that patience with people who are already on board but stumbling with trans acceptance is effective (rather than an absolutist mindset), I fear taking the same approach with AI usage is less effective.

I'm aware that attempting to shame users not only results in misplaced anger that could have been directed at the actual source of our harms, but is also generally less effective at actually swaying someone towards a better way. AI usage feels like a nigh infinitely more slippery slope than someone struggling with their own transphobia. Only a single person I know who seemed to be considering it came to the conclusion that it was a detriment, and it's possible they were already coming to that conclusion anyway.

I hope either I'm wrong or that we find a more effective solution, because the sheer number of AI converts, many of them people I respect who were staunch opponents and now seem practically dependent on it to exist, almost scares me more than the rise of transphobia.

@disorderlyf It is scary! And I don't have a solution for that. But like, the solution also isn't to just cut everyone off either.

Honestly, it's quite analogous to drug usage. I know people who have become dependent on AI, and know that, and recognize they have de-skilled themselves. And so, I can only wait for that to run its course and for them to figure out a way forward. Once they have the information, only they can make that decision to stop, as difficult as it might be. Maybe one day we will have 12-step programs for AI use?

On the other hand, I know people who use AI responsibly, at least in terms of harm to themselves. Heck, just today I saw someone use AI in a way that made me go "yeah, I can't argue that that didn't make sense to do", and a couple days ago I saw an AI bot comment on an issue I created, pointing out the exact line of code that needed changing to fix the issue (I'm tempted to just make the PR now, which I wouldn't have otherwise). Heck, even I have used AI productively by choice, like, once (many times if you count machine translation, which these days is LLMs behind the scenes). It doesn't fix the moral problems of the technology and the harm it can do in other situations... but that moral decision is ultimately one people need to make for themselves; at best, I can educate them about the impact.

@lina @disorderlyf i don't think there's enough info at the moment to determine if there is such a level of 'responsible' LLM use. i'd personally guess, from observation of people and the opinions of people more knowledgeable than me, that it's like lead exposure, with no safe level.

It so sucks that so many of the people who have fully bought into it are in positions where it affects millions rather than just themself

@Ember @disorderlyf I'm quite confident responsible LLM usage exists. Machine translation is one of those. It's really hard to argue that more accessible higher quality translation is a bad thing (and I say this as someone with friends who are professional translators).

I also have a project on my TODO list which will involve an LLM (SLM?) where I hope to be able to make ethical, safe usage of the technology for a particular use case (TL;DR local home automation assistant).
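
To give an idea of the shape of it: something like the minimal sketch below, assuming an Ollama-style local server (the model name, prompt, and dispatch hook are just illustrative placeholders, not the actual project):

```python
# Minimal sketch: turn a natural-language request into a structured device
# action using a small, locally hosted model. Assumes an Ollama-style server
# on localhost; model name and the dispatch hook are hypothetical.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

PROMPT = """You control a home. Available devices: {devices}.
Reply ONLY with JSON of the form {{"device": "...", "action": "..."}}.
Request: {request}"""

def interpret(request: str, devices: list[str]) -> dict:
    resp = requests.post(OLLAMA_URL, json={
        "model": "qwen2.5:3b",        # any small local model would do
        "prompt": PROMPT.format(devices=", ".join(devices), request=request),
        "stream": False,
        "format": "json",             # constrain the output to valid JSON
    }, timeout=30)
    resp.raise_for_status()
    return json.loads(resp.json()["response"])

if __name__ == "__main__":
    action = interpret("it's too dark in the office", ["office_lamp", "fan"])
    print(action)  # e.g. {"device": "office_lamp", "action": "on"}
    # dispatch(action)  # hypothetical bridge into the actual home automation
```

The appeal being that everything stays on-device, and the blast radius of a hallucination is a lamp turning on instead of a fan.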

More generally, the examples of "responsible" usage I'm thinking of (not considering moral implications of the tech/companies, just the output) essentially involve one-shot projects and tasks. Like, if the LLM can do X thing many times faster than you'd be able to do it yourself, and X isn't a thing worth paying someone for nor a thing that aligns with your own personal growth priorities, and the impact of hallucinations/imperfection is essentially inconsequential, it's quite reasonable to argue that outsourcing it to an LLM is responsible (to yourself at least).

But I think the implied rule #1 would be "never force it on anyone", and yeah, all the companies pushing for AI use are failing that rule.

Of course, there is a slippery slope, but I think that's something that can be countered with education (and perhaps legislation). Just like you can get addicted to OTC medication, but that doesn't mean OTC medication is bad. Alas, LLMs have been unleashed on the world really, really irresponsibly.

@Ember @disorderlyf Of course, it is possible that (high powered) LLMs are inherently "too addictive/dangerous" for them to be safe to make broadly available. That would mean that while some people can responsibly use LLMs, not everyone can, and could be an argument for restricting usage more heavily.

Just like you could give someone unrestricted access to morphine, and it's entirely possible they'd avoid becoming addicted, but we obviously don't do that because there's a large risk they'd get addicted.

TL;DR "we don't know if LLMs are like cannabis or like morphine yet".

@lina @disorderlyf not really sure that i'd consider LLMs a translation upgrade, personally i'd much rather a worse translation that can't make things up like LLMs can

to be clear, i consider someone using an LLM to generate code that others use as affecting others, not just adding LLM 'features'

I fail to see how responsible use is possible when we don't know the full extent of the risks, and what we do know is pretty bad

As an aside, framing having boundaries and strong opinions as a slippery slope is really not great imo, particularly when those boundaries are strongly enforced because if any leeway is given it will be exploited

@Ember @disorderlyf Machine translation has practically always had the capacity to "hallucinate"; it's always been imperfect because languages don't map perfectly. So I don't think that's much of a downgrade. Google Translate was doing Weird Stuff long before LLMs existed. Translation is fuzzy, which makes it a good match for fuzzy approaches using ML, which necessarily brings with it the risk of logical/factual mistakes. Of course, to responsibly use the output means to understand that. Google Translate has caused many an unwarranted outrage due to a mistake, again long before LLMs.

As for generating code, I was mostly referring to people using LLM output themselves, not inflicting it on others. Doing the latter responsibly requires taking full responsibility for the code, which is something few people seem to be able to do (though I wouldn't say it's entirely impossible). Again, there are degrees here; using an LLM for "small scale" autocomplete, for example, is quite likely to be harmless.

Actually, part of the harm maximization here is companies pushing huge LLMs vs. smaller scale local use cases.

> I fail to see how responsible use is possible when we don't know the full extent of the risks, and what we do know is pretty bad

I mean... we don't know the full extent of the risks of the technology as a whole, sure, but it's not difficult to narrow down a particular use case and make a case for it being fairly harmless.

@lina @Ember Maybe it's different for other languages, but powering machine translation with LLMs doesn't seem to have made noticeable differences in the quality of translations I've gotten, and most of my use for machine translation is filling in gaps for Western European languages I already partially speak.

I don't know if that's what's happening here, but I think people are forgetting we had machine translation before the advent of LLMs.

I think you hit the nail on the head comparing it to drug use, but I wasn't sure if I was being dramatic when I thought that myself. Knowing someone else thinks so is reassuring, as is hearing that you know people who are dependent and at least recognise the harm to themselves. Much like with drug usage, I agree with you that berating the person isn't going to help them. I acknowledge it's likely to reinforce the behaviour more than help, and a lot of the times I have done so, it has been out of fear of what it's doing to more people than I can count.

I don't think it manages to be as beneficial, while minimising harm, as cannabis. Though I'm trying to resist the urge to label it as dangerous as fentanyl.

@lina @Ember @disorderlyf My big issue with LLM translation is that the increase in how natural it sounds is much higher than the increase in actual translation ability. Which means that the new translations seem much better while only actually being a little better, and people trusting Google Translate too much was already a big issue. While LLMs' accuracy has gotten better, I'm pretty sure the perceived accuracy / actual accuracy ratio has gotten worse, which isn't great.

On the hallucination side, DeepL is famous for turning single words into entire dictionary definitions, and both it and Google Translate can translate things entirely wrong or just remove pieces of sentences when they get complicated, but ChatGPT is the first I've observed to read into the source text and make up its own fanfic-like additional sentences to embellish the original text. And while that might be closer to the options that a human translator has when translating, I don't really trust ChatGPT to know when it should and shouldn't be doing that. So yes, while Google Translate did hallucinate, I also think LLMs' hallucination abilities are enhanced compared to Google Translate, and not in the "less hallucinations" direction.

@TellowKrinkle @Ember @disorderlyf To clarify, I'm not saying I use ChatGPT to translate. Google Translate is known to use LLMs behind the scenes at least some of the time now. I don't know how the built in Grok translate on Twitter works but presumably it's the LLM wrapped in something. And AIUI DeepL is also relying on LLMs these days. I'm talking about those.

Presumably there's tuning/RLHF involved to make that usage not do what ChatGPT does.

@lina @TellowKrinkle @Ember afaict everything that was already using machine learning swapped to using LLMs for better or worse.
@lina @Ember @disorderlyf Ahh. A lot of people were using ChatGPT as a translator because you can actually give it context, which can help in many situations. But it also just... does weird things, probably due to not being tuned for translation.
@lina I think this might be a general problem with social media. There was this amazing talk by Daniel Kriesel where he said at the end that the most important social development in the 2010s was the "rise of the outraged". And it makes sense why. In general, humans like attention. And being polemic gets more clicks and replies than being solution-oriented. Social media is quite literally training people to have strong opinions. And then there's the group effect: "If people I respect lash out on the internet, maybe it's not so bad when I do it as well."

@sigmasternchen Yeeeeeah. And it goes across groups and political leanings (remember GamerGate?).

Having lived through the past 15 years, and perhaps even myself contributed to it in my own niche circles in the past, I'm just so tired of it all. These days I distance myself from people who make their entire personality about outrage.

@lina I think I actually post fewer AI critical takes here because it's too easy to get engagement with that kind of content. It feels slimy to do it too often.
@dvshkn @lina One of the epiphanies I had relatively recently was that the thing that annoyed me most was essentially just clout chasing. But the people who post it probably object to "clout chasing" in the abstract, because they define it as "people posting about popular things that I don't care about"

@lina I think half the problem really is that a lot of the Internet has become addicted to being angry at the state of the world, so they're actively looking for things to be angry about so they can display that righteous anger.

This interacts *extremely poorly* with the fact that (as you say) on Bluesky and *ESPECIALLY* on Mastodon the "discourse" is *extremely* receptive to anti-AI stuff so "I hate AI, share if you agree!" stuff gets boosted more. There is a very narrow band of "acceptable" opinions on here and if you post stuff that validates them you get attention. Given how difficult it is to get attention on Mastodon to begin with, it's a vicious circle.

Honestly I echo what the post you quoted said, and it's part of why I reactivated my Bluesky, because at least I get the impression that people on Bluesky care about *something else in the entire world* other than tech sucking and the world being doomed.

@lina It just gets incredibly dull coming on here and seeing a feed that's just "I love open source!" and "AI is horrible, being creative is good!" Like cool, yeah, I don't like AI either, but they're obviously only posting that for clout because that shit's like catnip for Mastodon's natural user base, and Mastodon has basically *nothing outside* that natural user base.

@j0ebaldw1n Yup. Like, I posted about AI recently, but that's because I was replying to a specific situation I'm personally passionate about and I even ran a test personally to see the status quo. I don't go around posting "AI sucks" every chance I get or boosting that discourse. It's exhausting. And indeed as someone with friends who use AI, I value the health of my social circle more than that and I know cutting people off (or worse, putting them on public blast for it) is not going to do me any favors or meaningfully counter the proliferation of AI use.

I did get someone to switch from ChatGPT to Claude, because if they're going to use AI, I'd rather they use the one that doesn't have an increasingly publicly psychopathic CEO who took the first chance to become best buddies with the military industrial complex. Things aren't black and white.

@lina > implying dario isnt a dangerous psychopath

@fiore Look they all kind of are but there's degrees, you know?

Not going to claim I've done deep research or anything, I'm biased by what ends up on my feed... but at least the things Altman has been saying recently have been quite disgusting.

@lina he is honestly just quite a bit more visible . but like . comparing evil with eviler doesnt rlly make evil any better :P
@j0ebaldw1n @lina Posting something for clout on Mastodon seems like a misunderstanding of at least one of those things.
@Virginicus @lina I don't know what else you'd call posting things that you know will get a positive reception and lots of engagement, and that have no real purpose other than getting that positive reception and engagement.

@j0ebaldw1n @lina
But also, on Mastodon it really completely depends on who you follow. There's no mixing in of new people so if you follow mostly a certain kind of tech people you'll get a lot of the same opinions over and over.

I try following more of a mix, including a lot of hashtags (they bring in more different opinions and viewpoints).

@jannem @lina I know all this. But the discovery is still bad and the network is still very much awash in one specific kind of person (Linux user/nerd/open source/anti-AI/at least vaguely leftist). Which is fine as far as it goes, but it just overwhelms everything else. And also, maybe sometimes I want to see pictures of birds; I do not want to follow hashtag-bird and then see every single picture of a bird everyone on Mastodon ever posts. It's a poor alternative to an algorithmic feed or good discovery system.
@jannem @lina I’m not sure anyone at the Mastodon product team, or its various evangelists, have really ever understood (or even really cared to find out) why half the network just stopped posting the moment Bluesky invites started proliferating - or in some cases they seem to actively welcome it for weeding out the unbelievers. @kissane’s piece from 2023 still rings extremely true today: https://erinkissane.com/mastodon-is-easy-and-fun-except-when-it-isnt

@j0ebaldw1n @jannem @kissane Ngl Bluesky's "plug your own algorithm" thing is extremely cute and I wish it was a thing in Mastodon.

Like, I'm subbed to a feed there which apparently runs on someone's home server, and it consistently shows me stuff I really like (a good mix of relevant stuff from my follows and similar stuff from others).

@lina @j0ebaldw1n @kissane
I do agree. User-selectable (and user-implementable, at the instance level) algorithms would be welcome.

If nothing else, I'd like one that bubbles up posts and boosts by people I typically interact with over other posts.
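
Something like this toy scoring sketch is what I mean (accounts, counts, and weights entirely made up):

```python
# Toy sketch of the idea: rank timeline posts by how often I actually
# interact with their author, with simple recency decay. All data is made up.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float

# Hypothetical per-account interaction counts (replies, favourites, boosts).
interactions = {"alice": 42, "bob": 7, "stranger": 0}

def score(post: Post) -> float:
    affinity = interactions.get(post.author, 0)
    # Square-root damping so one very chatty friend doesn't drown out
    # everyone else, divided by a simple recency decay.
    return (1 + affinity) ** 0.5 / (1 + post.age_hours)

timeline = [Post("stranger", 0.5), Post("alice", 3.0), Post("bob", 1.0)]
for post in sorted(timeline, key=score, reverse=True):
    print(post.author, round(score(post), 2))  # alice, bob, stranger
```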

@jannem @lina @kissane Realistically I think a *lot* of people want that but the project is so ideologically against anything perceived as “an algorithm”, and proposing them tends to get you so many people intent on ruining your life, that it will just never happen

I just think a lot of people fail to realise that network effects intrinsically mean that every person who bounces off or never joins your network, because it misses key features or is hard to use or is full of boring stuff etc, reduces its utility to *everyone* even if it doesn't feel like it

@lina Mastodon users are creating the same echo chambers they criticize big social media platforms for, except the filtering is driven by user choice rather than algorithms. I don't know which is worse...