I'm writing this in English.

Not because English is my first language—it isn't. I'm writing this in English because if I wrote it in Korean, the people I'm addressing would run it through an outdated translator, misread it, and respond to something I never said. The responsibility for that mistranslation would fall on me. It always does.

This is the thing Eugen Rochko's post misses, despite its good intentions.

@Gargron argues that LLMs are no substitute for human translators, and that people who think otherwise don't actually rely on translation. He's right about some of this. A machine-translated novel is not the same as one rendered by a skilled human translator. But the argument rests on a premise that only makes sense from a certain position: that translation is primarily about quality, about the aesthetic experience of reading literature in another language.

For many of us, translation is first about access.

The professional translation market doesn't scale to cover everything. It never has. What gets translated—and into which languages—follows the logic of cultural hegemony. Works from dominant Western languages flow outward, translated into everything. Works from East Asian languages trickle in, selectively, slowly, on someone else's schedule. The asymmetry isn't incidental; it's structural.

@Gargron notes, fairly, that machine translation existed decades before LLMs. But this is only half the story, and which half matters depends entirely on which languages you're talking about. European language pairs were reasonably serviceable with older tools. Korean–English, Japanese–English, Chinese–English? Genuinely usable translation for these pairs arrived with the LLM era. Treating “machine translation” as a monolithic technology with a uniform history erases the experience of everyone whose language sits far from the Indo-European center.

There's also something uncomfortable in the framing of the button-press thought experiment: “I would erase LLMs even if it took machine translation with it.” For someone whose language has always been peripheral, that button looks very different. It's not an abstract philosophical position; it's a statement about whose access to information is expendable.

I want to be clear: none of this is an argument that LLMs are good, or that the harms @Gargron describes aren't real. They are. But a critique of AI doesn't become more universal by ignoring whose languages have always been on the margins. If anything, a serious critique of AI's political economy should be more attentive to those asymmetries, not less.

The fact that I'm writing this in English, carefully, so it won't be misread—that's not incidental to my argument. That is my argument.

@hongminhee

I'm writing this in English.

Do you, though? Your writing style was different in the past, so I am pretty sure that you now machine-translate, or perhaps use an LLM writing assistant.

To be honest, the non-slop version of you was much better.

@silverpill Yes, I used an LLM to help write it. I wrote my thoughts in Korean first, then had them translated. That's kind of the whole point I was making.

I'm not a native English speaker. When I write long-form English on my own, it's slow and the result is often not what I actually meant. Using a tool to bridge that gap doesn't make the thoughts less mine. It makes them more accurately mine, not less. A non-native speaker hiring a copy editor wouldn't get this reaction.

I'll grant you that “the non-slop version of you” stings a little. But I'd rather be legible and called slop than be authentic and misread.

@hongminhee Authenticity matters. When I see slop I usually just ignore it, because reading it is like watching paint dry, and I think I am not alone in that.
You're basically the only person with whom I continue to communicate despite all of this.
@hongminhee @silverpill Hi. I'm curious (as a non-native English speaker on the other side of the argument): what gives you the confidence that machine translation won't be misread?
I'd be far less secure in my criticism of MT if the tools were able to probe the author for meaning, but we're not quite there. I also think MT in the hands of a polyglot-ish author has a better chance of being somewhat useful (at least it's a huge difference from unedited, unverified client-side translations).
@hongminhee @silverpill I really think there could be a lot to do to bridge the fluency gap at the UX level. You refer to your experience flipping pages of dictionaries, and I relate to that quite hard: that's where I'd like to see effort and change in software.
However, I feel comfortable bearing the responsibility of making my speech accessible to an English or Spanish speaker who doesn't speak French, and any failure would be mine.

@ddelemeny @silverpill The confidence comes from an asymmetry I suspect many non-native speakers will recognize: I can read English much better than I can write it.

When I write in English on my own, I often know, as I'm writing, that something is off—that the sentence doesn't carry the weight I intended, or that the nuance I wanted is somewhere between the words I've chosen. I just don't always know how to fix it. When I write in Korean first and then work with an LLM, I can read the result and check it against what I meant. Sometimes I'll see a phrase and think: yes, exactly that, I didn't know how to get there myself. That moment of recognition is the verification step.

So I'm not trusting the machine blindly. I'm using my reading ability—which is reasonably good—to audit an output that my writing ability couldn't have produced alone. It's an imperfect process, but it's not as unmoored as handing a text to a system and walking away.

Your point about polyglot authors is well taken. The tool works better when the person using it can actually evaluate what it produces. I'd agree that's a meaningful distinction.

@hongminhee @silverpill I see. One personal reason I don't want to rely on translators and prefer the "hard" way is that I believe my reading and understanding are sharpened by my attempts at writing. That's the essence of the "immersion in a language" argument for me, and I have experienced it several times (positively by being immersed in English- and Spanish-speaking cultures, and negatively by the lack of it in German and Korean). Do you relate to that?
@hongminhee @silverpill How do you think translators shape or maintain your abilities in a foreign language, as opposed to research and experimentation ?

@ddelemeny @silverpill I relate to the immersion argument, and I think it's part of why I avoided machine translation for so long—not out of principle, but because the output wasn't worth learning from. Older MT between Korean and English produced something closer to a word-by-word skeleton than actual language. You couldn't look at it and think: oh, that's how a native speaker would put it. It was more like a scaffold you had to tear down before building anything.

LLMs are different enough that I've had to revise that instinct. The output is often genuinely idiomatic, and when I read a phrase that lands exactly right, there's a recognition that functions a lot like learning—the same feeling as encountering a sentence in a book and thinking: I'll remember that. I do find myself absorbing expressions that way, probably more than I would have expected.

That said, I think your point holds at the edges. For shorter writing I still work without assistance, partly for practical reasons and partly because I notice the difference when I don't. So I suspect I'm arriving at something similar to what you're describing, just from the other direction—using the tool for longer texts while trying to keep the muscle from atrophying entirely on shorter ones.

The dynamic you mention with German and Korean is interesting too. What Korean is for you, English has long been for me; I imagine the lack of immersion shapes the experience in ways that are hard to compensate for with tools alone.

@hongminhee @silverpill Thank you for replying with care; your POV is really interesting.
Have you read the Reg's article about semantic ablation that was shared around some time ago?

https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/

This may be part of why other commenters allegedly found a negative difference in your writing using MT, and it's a concern I have about building fluent writing skills: MT (and autocomplete) collapse possibilities into an average that's probably correct enough, but also probably low-entropy.

@hongminhee @silverpill Writing short-form on your own gives you a regular workout; that's something I think is very necessary when machine assistance is involved.
On semantic collapse, I'm not sure there's an equivalent practice that keeps an author from clearing a lower bar than intended and settling for "good enough". "Good enough" is a complex object here, and while I don't challenge your ability to keep a higher standard, I worry about the subjective, self-indulgent bias in general.

@ddelemeny That's a useful framing, and the article is worth reading. The concern about entropy collapse is real—I've seen it happen when native speakers run their own writing through a model and get something smoother but somehow emptier back.

My situation is a bit different, though. The high-entropy original is in Korean. The LLM's job is to carry that across, not to sand it down. Whether it succeeds is a fair question, but the direction of the process matters. I'm not polishing a draft into blandness; I'm trying to get something that exists in one language to exist in another without losing its shape.

Anyway, this has been a genuinely interesting exchange. Thank you for the link.

@hongminhee oh I'm glad I didn't waste your time! Thanks for engaging in the conversation!
@hongminhee do you think your writing skills will improve with continued reading of the LLM-reflected translations, to the point where you may no longer need it?
@julian I actually just addressed something close to this in a reply up the thread—might be worth a read!

@hongminhee thanks, good answer.

I do wonder whether you might end up sounding like an LLM, then. Best to interject some of your own style later on.

@julian Yeah, that's why I'm still writing short posts myself. My accent won't go anywhere!

@silverpill @hongminhee

I don’t intend this as an attack, but please realize that when you say “AI slop” you are saying “sloppy person who uses AI”.

@hongminhee very clearly is not such a person, so please don’t imply they are, even if they chose an assistant you disapprove of to help them communicate.

I am irritated by the term “AI slop” because it shifts the responsibility from the user to their tool, from the way they use the tool to something that’s inevitable.

@lain_7 @hongminhee I focus on the tool because it seems that a lot of people who use this tool remain unaware of how it affects them. Maybe they are aware and just don't care, maybe even a majority of them don't care, but in this particular case I was assuming the former.

@silverpill @hongminhee

I think it’s wrong to focus on the tool — since it shifts the “blame”. A tool can be used well, or it can be used carelessly. It’s the person that decides how the tool is used.

I have to admit that, at least in coding, AI can overwhelm a person trying to use it carefully, but that doesn’t excuse *that person* submitting a careless, sloppy pull-request.

@silverpill @hongminhee DeepL or some LLM translation assisants often use hyphens to replace commas or semicolons, so I usually use this feature to confirm if a text is machine-translate.