There is no such thing as ethical AI. There's no right way to do the wrong thing. Don't compromise on the pollution of the information sphere, the denigration of art and culture, and the proliferation of consciousness-impersonating political agents
local translation ai?
Hallucinations and soulless misinterpretations are reason enough not to trust LLMs to translate messages written by humans
you can say the same about cloud ai...
also, what do you define as "ai"? pathfinding algorithms for example are classified as a type of ai.
edit: i meant cloud translation.
@65_percenter LLMs, or as I've taken to calling them, large lying machines, along with image generators, video generators, and music/speech generators, are what I have problems with
okay but you didn't answer the cloud translation point.
@65_percenter Yeah, I don't care whether large lying machines are local or cloud-based, because they're bad either way
can you elaborate why a local translation ai is worse than a cloud based translation service?
@65_percenter No, because that's not what I said and it's not what I'm critiquing. AI-generated text is all prone to hallucinations and soulless misinterpretations that degrade communication. For that reason I don't care whether it's local or not, I'll oppose it equally.
by that logic, you will also criticize all translation services (non-ai) because they are soulless, prone to miscommunication, etc. that's why i'm asking. are you against translation services?
False equivalence fallacy, translation isn't the same as a technology that regularly makes up details
okay but i'm not asking if the quality of the translations is good or not, i'm asking why it's unethical.
@65_percenter It's unethical because deliberately degrading communication is unethical
well lots of things degrade communication without it being unethical. think thick accents or bad human translators. there needs to be a moral violation to call x unethical.
The difference is LLMs are a deliberate degradation of already existing communication. Accents don't make up details about what you're trying to say; an LLM will literally make up shit you didn't say.
well translation models vary quite wildly in accuracy. some are bad, some are decent, some are competitive with human translators.
> LLMs will literally make up shit
you see, this is an empirical claim, not a moral one. also a huge generalization that isn't necessarily true in this case.
i agree that LLMs/generative ai are mostly harmful, but in this case, it's not unethical (IMO). however, presenting this specific type of AI (translation AI) as fully reliable is wrong, and you could even call it unethical.
That is an empirical claim and my moral claim is that it is unethical to knowingly promote tools that degrade communication
> Knowingly promoting tools that degrade communication
okay, but that's only unethical if there's a moral failure.
the model can produce lower quality translations without it being unethical.
if your point is that it's unethical to market translation AI as reliable when it isn't, i agree. but saying translation AI is unethical per se just makes it seem like you're trying to turn a quality critique into a moral one.
here's an important question for you. what's the specific ethical principle being violated in all cases?
LLMs degrade communication, it is immoral to deliberately degrade communication, it is immoral to use LLMs.
> LLMs degrade communication, it is immoral to deliberately degrade communication, it is immoral to use LLMs.
that's just restating your conclusion as a premise though. you still haven't explained why degrading communication is immoral in itself, and/or what ethical principle is being violated in all cases.
Communication is a pillar of consent, which is a more foundational ethical principle
agreed.
but that still doesn't make translation AI unethical per se. it only makes its use unethical in situations where accurate communication is required for valid consent. take court documents for instance. outside of that, degraded communication isn't a moral violation.
Consent isn't something that begins and ends with explicit contracts; it's an ongoing communication. Without reliable communication there is no such thing as consent.
by that logic, we'd have to say speaking with a heavy accent is immoral, as well as joking, irony, poetry, or slang. and by that logic bad human translators would be immoral, which is obviously not how ethics works.
@65_percenter
Lying translators are immoral; heavy accents aren't intentional; and joking, irony, poetry, and slang enhance communication, they don't degrade it. The ethics of LLMs is very similar to lying: because you know it lies all the time, it is a kind of reckless abandon to treat the technology as anything different from how you would treat a chronic compulsive liar and narcissist who can't admit when they don't know something
something that can mislead isn't automatically equivalent to lying, otherwise bad human translators, second-language speakers, or experimental communication tools would be immoral to use at all, which seems implausible.
here is my real point:
βit's unethical to rely on or promote tools in situations where their known limitations undermine informed consent.β
on that, i think we agree, correct?
An LLM isn't a bad translator, it is a lying translator. It doesn't just alter the message; it makes up details from thin air with no conscious regard for how its lies will hurt those reading it.
LLMs don't have intent, beliefs, or regard. thus, they don't "lie". what you're saying is just a metaphor. they can hallucinate, yes, and that makes them unethical where accuracy is required for consent, which is why you shouldn't use or promote them there.
but a tool that can mislead isn't immoral per se. the moral responsibility lies with how it's deployed and presented. you can't just treat a probabilistic system as a moral agent.
If you promote or use the lying machine knowingly, then you are recklessly spreading misinformation with no regard for the potential hurt it will cause. That is immoral. Lying using a tool is still lying, the same way killing with a gun is still killing.