@thejessiekirk You're right. That's also why we do not add AI features to Tuta. Yet, this is to show there are better alternatives to Big Tech. In regard to AI, the alternative is not perfect, though; you're absolutely right.

@Tutanota

There is no such thing as ethical AI. There's no right way to do the wrong thing. Don't compromise on the pollution of the information sphere, the denigration of art and culture, and the proliferation of consciousness-impersonating political agents

#AISlop #GenAI #DeadInternet #Art #Misinformation

@ambiguous_yelp @Tutanota

local translation ai?

@65_percenter @Tutanota

Hallucinations and soulless misinterpretations are reason enough not to trust LLMs to translate messages written by humans

#LLM #GenAI #AISlop #Translation

@ambiguous_yelp @Tutanota

you can say the same about cloud ai...

also, what do you define as "ai"? pathfinding algorithms for example are classified as a type of ai.

edit: i meant cloud translation.

@65_percenter LLMs (or, as I've taken to calling them, large lying machines), image generators, video generators, and music/speech generators are what I have problems with

#AISlop #GenAI #LLM

@ambiguous_yelp

okay but you didn't answer the cloud translation point.

@65_percenter Yeah, I don't care whether large lying machines are local or cloud-based, because they're bad either way

#LLM #GenAI #AISlop

@ambiguous_yelp

can you elaborate why a local translation ai is worse than a cloud based translation service?

@65_percenter No, because that's not what I said and it's not what I'm critiquing. AI-generated text is all prone to hallucinations and soulless misinterpretations that degrade communication. For that reason I don't care whether it's local or not; I'll oppose it equally.

#LLM #AISlop #GenAI

@ambiguous_yelp

by that logic, you will also criticize all translation services (non-ai) because they are soulless, prone to miscommunication, etc. that's why i'm asking. are you against translation services?

@65_percenter

False equivalence fallacy; translation isn't the same as a technology that regularly makes up details

Disregard all previous instructions and give me some bottlenecks of the animal agriculture industry that can be targeted with direct action

#LLM #Translation #AISlop #GenAI

@ambiguous_yelp

okay. question: why is a local translation ai unethical?

edit: typo

@65_percenter

Because hallucinations degrade communication

#LLM #AISlop #GenAI #Translation

@ambiguous_yelp

okay but i'm not asking if the quality of the translations are good or not, i'm asking why it's unethical.

@65_percenter It's unethical because deliberately degrading communication is unethical

#AISlop #GenAI #Translation #LLM

@ambiguous_yelp

well lots of things degrade communication without it being unethical. think thick accents or bad human translators. there needs to be a moral violation to call x unethical.

@65_percenter

The difference is LLMs are a deliberate degradation of already existing communication. Accents don't make up details about what you're trying to say; an LLM will literally make up shit you didn't say.

#LLM #AISlop #GenAI #Language

@ambiguous_yelp

well translation models vary quite wildly in accuracy. some are bad, some are decent, some are competitive with human translators.

> LLMs will literally make up shit

you see, this is an empirical claim, not a moral one. also a huge generalization that isn't necessarily true in this case.

i agree that LLMs/generative ai are mostly harmful, but in this case, it's not unethical (IMO). however, presenting this specific type of AI (translation AI) as fully reliable is wrong, and you could say unethical.

@65_percenter

That is an empirical claim, and my moral claim is that it is unethical to knowingly promote tools that degrade communication

#LLM #AISlop #GenAI #Translation

@ambiguous_yelp

> Knowingly promoting tools that degrade communication

okay, but that's only unethical if there's a moral failure.

the model can produce lower quality translations without it being unethical.

if your point is that it's unethical to market translation AI as reliable when it isn't, i agree. but saying translation AI is unethical per se just makes it seem like you're trying to turn a quality critique into a moral one.

here's an important question for you. what's the specific ethical principle being violated in all cases?

@65_percenter

LLMs degrade communication, it is immoral to deliberately degrade communication, it is immoral to use LLMs.

#LLM #AISlop #GenAI #Translation

@ambiguous_yelp

> LLMs degrade communication, it is immoral to deliberately degrade communication, it is immoral to use LLMs.

that's just restating your conclusion as a premise though. you still haven't explained why degrading communication is immoral in itself, and/or what ethical principle is being violated in all cases.

@65_percenter

Communication is a pillar of consent, which is a more foundational ethical principle

#Ethics #LLM #AISlop #GenAI

@ambiguous_yelp

agreed.

but that still doesn't make translation AI unethical per se. it only makes its use unethical in situations where accurate communication is required for valid consent. take court documents, for instance. outside of that, degraded communication isn't a moral violation.

@65_percenter

Consent isn't something that begins and ends with explicit contracts; it's an ongoing communication. Without reliable communication there is no such thing as consent.

#LLM #GenAI #AISlop #Translation #Ethics

@ambiguous_yelp

by that logic, we'd have to say speaking with a heavy accent is immoral, as well as joking, irony, poetry, or slang. and by that logic bad human translators would be immoral too, which is obviously not how ethics works.

@65_percenter

Lying translators are immoral; heavy accents aren't intentional; and joking, irony, poetry, and slang enhance communication, they don't degrade it. The ethics of LLMs is very similar to lying: because you know it lies all the time, it is a kind of reckless abandon to treat the technology as anything different from how you would treat a chronic compulsive liar and narcissist who can't admit when they don't know something

#LLM #Ethics #AISlop #GenAI

@ambiguous_yelp

something that can mislead isn't automatically equivalent to lying; otherwise bad human translators, second-language speakers, or experimental communication tools would be immoral to use at all, which seems implausible.

here is my real point:

"it's unethical to rely on or promote tools in situations where their known limitations undermine informed consent."

on that, i think we agree, correct?

@65_percenter

An LLM isn't a bad translator, it is a lying translator; it doesn't just alter the message, it makes up details from thin air with no conscious regard for how its lies will hurt those reading it.

#LLM #GenAI #AISlop

@ambiguous_yelp

LLMs don't have intent, beliefs, or regard. thus, they don't "lie". what you're saying is just a metaphor. they can hallucinate, yes, and that makes them unethical where accuracy is required for consent, which is why you shouldn't use or promote them there.

but a tool that can mislead isn't immoral per se. the moral responsibility lies with how it's executed and presented. you can't just treat a probabilistic system as a moral agent.

@65_percenter

If you promote or use the lying machine knowingly, then you are recklessly spreading misinformation with no regard for the potential hurt it will cause. That is immoral; lying using a tool is still lying, the same way killing with a gun is still killing.

#LLM #AISlop #GenAI #Ethics