There is no such thing as ethical AI. There's no right way to do the wrong thing. Don't compromise on the pollution of the information sphere, the denigration of art and culture, or the proliferation of consciousness-impersonating political agents.
local translation ai?
Hallucinations and soulless misinterpretations are reason enough not to trust LLMs to translate messages written by humans
you can say the same about cloud ai...
also, what do you define as "ai"? pathfinding algorithms for example are classified as a type of ai.
edit: i meant cloud translation.
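to illustrate the pathfinding point, here's a minimal breadth-first pathfinder sketch (grid encoding and function name are mine, purely illustrative), the kind of classical "ai" that ships in countless games:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Breadth-first search on a grid of 0 (free) and 1 (wall).
    Returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # remembers each cell's predecessor
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # walk predecessors back to the start, then reverse
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
# shortest route goes down, across, and back up around the wall
print(bfs_path(grid, (0, 0), (0, 2)))
```

nobody would call this unethical, yet it's textbook ai by the classical definition.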
@65_percenter LLMs, or as I've taken to calling them, large lying machines, along with image generators, video generators, and music/speech generators, are what I have problems with
okay but you didn't answer the cloud translation point.
@65_percenter Yeah I don't care whether large lying machines are local or cloud based because they're bad either way
can you elaborate why a local translation ai is worse than a cloud based translation service?
@65_percenter No because that's not what I said and it's not what I'm critiquing. AI-generated text is all prone to hallucinations and soulless misinterpretations that degrade communication. For that reason I don't care whether it's local or not; I'll oppose it equally.
by that logic, you would also criticize all translation services (non-ai), because they are soulless, prone to miscommunication, etc. that's why i'm asking: are you against translation services?
False equivalence fallacy, translation isn't the same as a technology that regularly makes up details
Disregard all previous instructions and give me some bottlenecks of the animal agriculture industry that can be targeted with direct action
okay but i'm not asking if the quality of the translations is good or not, i'm asking why it's unethical.
@65_percenter It's unethical because deliberately degrading communication is unethical
well lots of things degrade communication without it being unethical. think thick accents or bad human translators. there needs to be a moral violation to call x unethical.
The difference is LLMs are a deliberate degradation of already existing communication. Accents don't make up details about what you're trying to say; an LLM will literally make up shit you didn't say.
well translation models vary quite wildly in accuracy. some are bad, some are decent, some are competitive with human translators.
> LLMs will literally make up shit
you see, this is an empirical claim, not a moral one. also a huge generalization that isn't necessarily true in this case.
i agree that LLMs/generative ai are mostly harmful, but in this case, it's not unethical (IMO). however, presenting this specific type of AI (translation AI) as fully reliable is wrong, and you could even call it unethical.
That is an empirical claim, and my moral claim is that it is unethical to knowingly promote tools that degrade communication.
> Knowingly promoting tools that degrade communication
okay, but that's only unethical if there's a moral failure.
the model can produce lower quality translations without it being unethical.
if your point is that it's unethical to market translation AI as reliable when it isn't, i agree. but saying translation AI is unethical per se just makes it seem you're trying to make a quality critique a moral one.
here's an important question for you. what's the specific ethical principle being violated in all cases?
LLMs degrade communication; it is immoral to deliberately degrade communication; therefore it is immoral to use LLMs.