End of the Japanese community at Mozilla due to the introduction of AI-based translation.

The community members have expressed disappointment and frustration that their long-term volunteer efforts and local knowledge were being replaced by machine translation, which they felt did not match the quality of human-provided support.

This is why Mozilla sucks so much; they are going crazy like the rest of the industry.

Source
https://support.mozilla.org/en-US/forums/contributors/717446

Added a screenshot in case Mozilla decides to remove it.

@nixCraft

Machine translation of a large document repository? That’s going to be a disaster. And I know Mozilla employs enough bilingual (or more) people to know that.

@david_chisnall @nixCraft the thread also mentions a bug where the bot undid already-translated elements and reset them to English.
... That means the bot should've been turned off immediately to prevent further damage, but I don't think that has been done. I can see how that bug, together with the automated translation, will drive contributors away. Machine translation could be useful, but perhaps only as a sanity check (reverse-translate back to English and compare), and it should only be used if the contributors actually want it.
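The reverse-translation sanity check mentioned above could be sketched roughly like this. The translation step itself is stubbed out with example strings (a real pipeline would call an actual MT system); only the comparison step is shown, here using a simple character-level similarity ratio as a stand-in for a proper quality metric:

```python
# Sketch of a round-trip sanity check: translate the localized text back
# to English and compare it against the original source string. The
# translated strings below are illustrative examples, not real MT output.
import difflib


def back_translation_score(original_en: str, back_translated_en: str) -> float:
    """Rough similarity ratio between the source English and the
    English obtained by translating the localized text back."""
    return difflib.SequenceMatcher(
        None, original_en.lower(), back_translated_en.lower()
    ).ratio()


original = "Click the star icon to bookmark this page."
# Hypothetical result of translating the Japanese version back to English.
round_trip = "Click the star icon to add this page to bookmarks."

score = back_translation_score(original, round_trip)
print(f"round-trip similarity: {score:.2f}")
```

A low score would flag the string for human review rather than auto-publishing it; the threshold and the similarity metric are the parts a real system would need to tune.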

@edwintorok @nixCraft

That bit bothers me the least. Lots of systems have bugs. The issue here for me is that they have a load of experts who understand the problem, and someone who does not understand the problem has mandated a tool that does not solve the problem and entirely disregarded the value of the experts.

Machine-assisted translation tooling primarily focuses on building, maintaining, and using a term dictionary: a set of prior translations that ensure that you consistently translate terms of art in the same way. If you don't do this, you get something that is technically a valid translation, but which is completely useless because the same term is translated in different ways throughout the document (based on surrounding context and translator preferences) and so it's impossible for a reader to tell that they're the same term.
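The term-dictionary idea described above can be sketched as a simple consistency check over a translated document. The glossary entries and the "unapproved variant" strings below are made-up illustrations, not taken from any real Mozilla glossary:

```python
# Sketch of a term-dictionary consistency check: each English term of art
# has one approved Japanese translation, and we flag any known alternative
# rendering that a machine translator might emit instead.

# Hypothetical glossary: English term -> approved Japanese translation.
GLOSSARY = {
    "bookmark": "ブックマーク",
    "private browsing": "プライベートブラウジング",
}

# Known unapproved variants for each term (illustrative examples only).
VARIANTS = {
    "bookmark": ["しおり"],
    "private browsing": ["秘密のブラウジング"],
}


def find_inconsistencies(translated_text: str) -> list[str]:
    """Return a message for every glossary term that appears in the
    translation under an unapproved variant."""
    problems = []
    for term, approved in GLOSSARY.items():
        for variant in VARIANTS.get(term, []):
            if variant in translated_text:
                problems.append(
                    f"'{term}' translated as '{variant}', expected '{approved}'"
                )
    return problems


doc = "しおりを追加するには、ツールバーの星をクリックします。"
for problem in find_inconsistencies(doc):
    print(problem)
```

Real CAT (computer-assisted translation) tooling does far more than this, of course: it maintains the glossary over time and surfaces prior translations to the human translator, rather than just flagging mismatches after the fact.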

It sounds like the Japanese translators have put a lot of effort into solving this problem. LLM-based translation is infamous for not doing this. It will translate terms based on how, across the training corpus, that term was translated when adjacent to other words. This is completely fine for short, low-stakes translation. If I want to translate a menu while travelling, for example, an LLM will typically give a good output (maybe don't trust it if you have serious allergies, but for the rest of us it's fine). But for something where you want to communicate technical content (in any domain), they're (at best) a good first approximation. And translators have repeatedly reported that cleaning up LLM translations is more work than doing the translation well in the first place.