RE: https://mastodon.social/@Gargron/1
10 years today.
I enjoyed @sboots’s essay on becoming a “generative AI vegetarian”, and for a few reasons: it’s a great read, first and foremost, but also my GOODNESS is this well-sourced. Sean pulls together many, many threads here; always grateful for a map like this.
@benedictc @Gargron imagine the cost of the subscription if all of those companies worked with real money and had to turn a profit from the start.
Imagine that they had to pay real copyright fees for all the content used in training the models.
Imagine that the illegal uses of the training data, and the deaths of people using their products, had meaningful consequences in court.
Imagine that they had to pay the full tax, the full price of the services that they use.
I'm writing this in English.
Not because English is my first language—it isn't. I'm writing this in English because if I wrote it in Korean, the people I'm addressing would run it through an outdated translator, misread it, and respond to something I never said. The responsibility for that mistranslation would fall on me. It always does.
This is the thing Eugen Rochko's post misses, despite its good intentions.
@Gargron argues that LLMs are no substitute for human translators, and that people who think otherwise don't actually rely on translation. He's right about some of this. A machine-translated novel is not the same as one rendered by a skilled human translator. But the argument rests on a premise that only makes sense from a certain position: that translation is primarily about quality, about the aesthetic experience of reading literature in another language.
For many of us, translation is first about access.
The professional translation market doesn't scale to cover everything. It never has. What gets translated—and into which languages—follows the logic of cultural hegemony. Works from dominant Western languages flow outward, translated into everything. Works from East Asian languages trickle in, selectively, slowly, on someone else's schedule. The asymmetry isn't incidental; it's structural.
@Gargron notes, fairly, that machine translation existed decades before LLMs. But this is only half the story, and which half matters depends entirely on which languages you're talking about. European language pairs were reasonably serviceable with older tools. Korean–English, Japanese–English, Chinese–English? Genuinely usable translation for these pairs arrived with the LLM era. Treating “machine translation” as a monolithic technology with a uniform history erases the experience of everyone whose language sits far from the Indo-European center.
There's also something uncomfortable in the framing of the button-press thought experiment: “I would erase LLMs even if it took machine translation with it.” For someone whose language has always been peripheral, that button looks very different. It's not an abstract philosophical position; it's a statement about whose access to information is expendable.
I want to be clear: none of this is an argument that LLMs are good, or that the harms @Gargron describes aren't real. They are. But a critique of AI doesn't become more universal by ignoring whose languages have always been on the margins. If anything, a serious critique of AI's political economy should be more attentive to those asymmetries, not less.
The fact that I'm writing this in English, carefully, so it won't be misread—that's not incidental to my argument. That is my argument.
When I started in security, one of the prevailing attitudes was "The weakest link in the chain will always be the human."
I would like to thank every LLM provider and startup for changing this paradigm by introducing a much weaker link in the chain.