RE: https://oldbytes.space/@gloriouscow/116224004520766154

There's a larger issue here, and that is that it's trendy in certain spaces to be extreme and opinionated about your beliefs, and angry at anyone who doesn't share them. I see this a lot on Bluesky and Mastodon.

The problem is, this is a slippery slope towards ending up in a tiny bubble and losing many of your friends. And that doesn't lead to happiness or to good mental health. Not for you, and not for the people around you.

The two biggest topics I see this with lately are AI and trans discourse. The simple fact is, morality isn't absolute. Words don't have absolute meanings. Tools aren't absolutely evil or absolutely moral.

It's okay to be sad at the state of the world. I'm sad too! And it's okay to be angry at problem people (think, the billionaire class). But when you direct that anger at your peers, just because they don't share the exact moral compass you have, you're just hurting them and hurting yourself.

It's impossible to live in a world where your social circle is fully aligned with you on beliefs and morals. It just isn't. It's okay to be disappointed. But if you start cutting people off for it, you aren't making anything better.

(cont'd)

In the extreme, you misfire, and misfire so badly you just end up isolating yourself. I've seen this happen multiple times recently, with people who saw what they perceived to be a moral slight or failing by someone they knew, and came out with knives swinging. Those never end well. At best you get blocked by one person. At worst you get called out for being completely unreasonable, and end up losing badly.

For better or worse, we need to work together. You can prioritize those closer to your ideals. It's okay to modulate your relationships based on how aligned you are on values. But that's the key word, modulation. If you care about someone and they make a moral choice that you dislike, the two healthy things to do are to either have a conversation about it (in private!), or just do nothing, accept the disappointment, and move on.

At the end of the day, outrage and anger might get you clicks, boosts, and a feeling of satisfaction... but are you really helping? Are you really making your life better in the long term? Others'?

I'm going to use an older example on purpose. Some time back, a friend expressed that she wanted to play and stream the wizard game, and I DMed her. I tried to explain why that would be so hurtful. And I convinced her not to. And I think that was a lot more productive than piling onto people who play the wizard game on the internet.

@lina My experience in trans spaces has been that patience with people who are already on board but stumbling with trans acceptance is more effective than treating imperfections with an absolutist mindset. I fear taking that same patient approach with AI usage is less effective.

I'm aware attempting to shame users not only results in misplaced anger that could have been directed at the actual source of our harms, but is also generally less effective at actually swaying someone towards a better way. AI usage feels like a nigh infinitely more slippery slope than someone struggling with their own transphobia. Only a single person I know who seemed to be considering it came to the conclusion that it was a detriment, and it's possible they were already coming to that conclusion anyway.

I hope either I'm wrong or that we find a more effective solution, because the sheer number of AI converts, many of them people I respect who were staunch opponents and now seem practically dependent on it to exist, almost scares me more than the rise of transphobia.

@disorderlyf It is scary! And I don't have a solution for that. But like, the solution also isn't to just cut everyone off either.

Honestly, it's quite analogous to drug usage. I know people who have become dependent on AI, and know that, and recognize they have de-skilled themselves. And so, I can only wait for that to run its course and for them to figure out a way forward. Once they have the information, only they can make that decision to stop, as difficult as it might be. Maybe one day we will have 12-step programs for AI use?

On the other hand, I know people who use AI responsibly, at least in terms of harm to themselves. Heck, just today I saw someone use AI in a way that made me go "yeah, I can't argue that that didn't make sense to do", and a couple days ago I saw an AI bot comment on an issue I created pointing out the exact line of code that needed changing to fix the issue (I'm tempted to just make the PR now, which I wouldn't have otherwise). Heck, even I have used AI productively by choice like, once (many times if you count machine translation, which these days is LLMs behind the scenes). It doesn't fix the moral problems of the technology and the harm it can do in other situations... but that moral decision is ultimately one people need to make for themselves; at best, I can educate them about the impact.

@lina @disorderlyf i don't think there's enough info at the moment to determine if there is such a level of 'responsible' LLM use, i'd personally guess from observation of people and the opinions of people more knowledgeable than me that it's like lead exposure, with no safe level.

It so sucks that so many of the people who have fully bought into it are in positions where it affects millions rather than just themself

@Ember @disorderlyf I'm quite confident responsible LLM usage exists. Machine translation is one of those. It's really hard to argue that more accessible higher quality translation is a bad thing (and I say this as someone with friends who are professional translators).

I also have a project on my TODO list which will involve an LLM (SLM?) where I hope to be able to make ethical, safe usage of the technology for a particular use case (TL;DR local home automation assistant).

More generally, the examples I'm thinking of for "responsible" usage (not considering moral implications of the tech/companies, just the output) essentially involve one-shot projects and tasks. Like, if the LLM can do X thing many times faster than you'd be able to do it yourself, and X isn't a thing worth paying someone for nor a thing that aligns with your own personal growth priorities, and the impact of hallucinations/imperfection is essentially inconsequential, it's quite reasonable to argue that outsourcing it to an LLM is responsible (to yourself at least).

But I think the implied rule #1 would be "never force it on anyone", and yeah, all the companies pushing for AI use are failing that rule.

Of course, there is a slippery slope, but I think that's something that can be countered with education (and perhaps legislation). Just like you can get addicted to OTC medication, but that doesn't mean OTC medication is bad. Alas, LLMs have been unleashed on the world really, really irresponsibly.

@lina @disorderlyf not really sure that i'd consider LLMs a translation upgrade, personally i'd much rather a worse translation that can't make things up like LLMs can

to be clear, i consider someone using an LLM to generate code that others use as affecting others, not just adding LLM 'features'

I fail to see how responsible use is possible when we don't know the full extent of the risks, and what we do know is pretty bad

As an aside, framing having boundaries and strong opinions as a slippery slope is really not great imo, particularly when those boundaries are strongly enforced because if any leeway is given it will be exploited

@Ember @disorderlyf Machine translation has practically always had the capacity to "hallucinate", it's always been imperfect because languages don't map perfectly. So I don't think that's much of a downgrade. Google Translate was doing Weird Stuff long before LLMs existed. Translation is fuzzy, which makes it a good match for fuzzy approaches using ML, which necessarily brings with it risk of logical/factual mistakes. Of course, to responsibly use the output means to understand that. Google Translate has caused many an unwarranted outrage due to a mistake, again long before LLMs.

As for generating code, I was mostly referring to people using LLM output themselves, not inflicting it on others. Doing the latter responsibly requires taking full responsibility for the code, which is something few people seem to be able to do (though I wouldn't say it's entirely impossible). Again, there are degrees here, like using an LLM for "small scale" autocomplete is quite likely to be harmless.

Actually, part of the harm maximization here is companies pushing huge LLMs vs. smaller scale local use cases.

I fail to see how responsible use is possible when we don't know the full extent of the risks, and what we do know is pretty bad

I mean... we don't know the full extent of the risks of the technology as a whole, sure, but it's not difficult to narrow down a particular use case and make a case for it being fairly harmless.

@lina @Ember Maybe it's different for other languages, but powering machine translation with LLMs doesn't seem to have made noticeable differences in the quality of translations I've gotten, and most of my use for machine translation is filling in gaps for Western European languages I already partially speak.

I don't know if that's what's happening here, but I think people are forgetting we had machine translation before the advent of LLMs.

I think you hit the nail on the head comparing it to drug use, but I wasn't sure if I was being dramatic when I thought that myself. Knowing someone else thinks so is reassuring, as is knowing that the people you know who are dependent at least recognise the harm to themselves. Much like with drug usage, I agree with you that berating the person isn't going to help them. I acknowledge it's likely to reinforce it more than help, and a lot of the times I have done so, it was out of fear of what it's doing to more people than I can count.

I don't think it's as beneficial, with as little harm, as cannabis. Though I'm trying to resist the urge to label it as dangerous as fentanyl.

@lina @Ember @disorderlyf My big issue with LLM translation is that the increase in how natural it sounds is much higher than the increase in actual translation ability. Which means that the new translations seem much better while only actually being a little better, and people trusting Google Translate too much was already a big issue. While LLMs' accuracy has gotten better, I'm pretty sure the perceived accuracy / actual accuracy ratio has gotten worse, which isn't great.

On the hallucination side, DeepL is famous for turning single words into entire dictionary definitions, and both it and Google Translate can translate things entirely wrong or just remove pieces of sentences when they get complicated, but ChatGPT is the first I've observed to read into the source text and make up its own fanfic-like additional sentences to embellish the original text. And while that might be closer to the options that a human translator has when translating, I don't really trust ChatGPT to know when it should and shouldn't be doing that. So yes, while Google Translate did hallucinate, I also think LLMs' hallucination abilities are enhanced compared to Google Translate, and not in the "less hallucinations" direction.

@TellowKrinkle @Ember @disorderlyf To clarify, I'm not saying I use ChatGPT to translate. Google Translate is known to use LLMs behind the scenes at least some of the time now. I don't know how the built in Grok translate on Twitter works but presumably it's the LLM wrapped in something. And AIUI DeepL is also relying on LLMs these days. I'm talking about those.

Presumably there's tuning/RLHF involved to make that usage not do what ChatGPT does.

@lina @TellowKrinkle @Ember afaict everything that was already using machine learning swapped to using LLMs for better or worse.

@lina @Ember @disorderlyf Ahh. A lot of people were using ChatGPT as a translator because you can actually give it context, which can help in many situations. But it also just... does weird things, probably due to not being tuned for translation.