Question for my fellow freelance translators: Do you do post-editing or do you refuse on principle? If you do take post-editing work, do you charge a higher hourly rate (than, say, for revising layout proofs of your own work)?

I've managed to avoid post-editing thus far, but a long-time client just asked me to proof a document they "translated internally." Not quite sure what to do going forward.

#translate
#translation
#xl8
#postediting
#xl8freelancer

Why no Markdown in posts‽‽

It's 2025 fercryinoutloud!

#markdown #postediting

Using LLMs as evaluators looks like a very interesting and promising direction, enabling simpler automatic post-editing pipelines. For those interested in fine-grained MT evaluation and APE, I recommend checking out this paper by Lu et al. (2024): https://arxiv.org/abs/2409.14335
#MT #postediting #NLP #AI #evaluation #LLM
MQM-APE: Toward High-Quality Error Annotation Predictors with Automatic Post-Editing in LLM Translation Evaluators

Large Language Models (LLMs) have shown significant potential as judges for Machine Translation (MT) quality assessment, providing both scores and fine-grained feedback. Although approaches such as GEMBA-MQM have shown state-of-the-art performance on reference-free evaluation, the predicted errors do not align well with those annotated by humans, limiting their interpretability as feedback signals. To enhance the quality of error annotations predicted by LLM evaluators, we introduce a universal and training-free framework, MQM-APE, based on the idea of filtering out non-impactful errors by Automatically Post-Editing (APE) the original translation based on each error, leaving only those errors that contribute to quality improvement. Specifically, we prompt the LLM to act as 1) an evaluator to provide error annotations, 2) a post-editor to determine whether errors impact quality improvement, and 3) a pairwise quality verifier as the error filter. Experiments show that our approach consistently improves both the reliability and quality of error spans against GEMBA-MQM, across eight LLMs in both high- and low-resource languages. Orthogonal to trained approaches, MQM-APE complements translation-specific evaluators such as Tower, highlighting its broad applicability. Further analysis confirms the effectiveness of each module and offers valuable insights into evaluator design and LLM selection.
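In short, the abstract describes a filtering loop: an error annotation is kept only if post-editing that error actually improves the translation. A minimal sketch of that loop, with the three LLM roles (evaluator, post-editor, pairwise verifier) stubbed out as hypothetical placeholder functions — the function names and toy data below are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the MQM-APE filtering idea with stubbed LLM calls.
# In a real pipeline, each of the three functions below would be an LLM prompt.

def annotate_errors(source, translation):
    # Stand-in for the LLM "evaluator" role: propose candidate error spans.
    return [{"span": "colour", "severity": "minor"},
            {"span": "bank", "severity": "major"}]

def post_edit(translation, error):
    # Stand-in for the LLM "post-editor" role: fix one annotated error.
    fixes = {"colour": "color", "bank": "riverbank"}
    return translation.replace(error["span"], fixes[error["span"]])

def prefers_edit(source, original, edited):
    # Stand-in for the "pairwise quality verifier": did the edit help?
    # In this toy example we pretend only the "bank" fix improves quality.
    return "riverbank" in edited

def mqm_ape(source, translation):
    # Keep only errors whose post-edit the verifier judges as an improvement.
    kept = []
    for error in annotate_errors(source, translation):
        edited = post_edit(translation, error)
        if prefers_edit(source, translation, edited):
            kept.append(error)  # impactful error: keep the annotation
    return kept

print(mqm_ape("Er saß am Ufer.", "He sat by the bank in colour."))
```

Running the sketch keeps only the "bank" annotation, since the verifier stub rejects the "colour" edit as non-impactful.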

Do you edit your longer posts to make sure you're under the word count and the post still makes sense?

#WordCount #PostEditing #PainInTheButt

Today, exactly one year since I launched #Fairslator, I am launching a parallel project called CAPE·MT (Computer-Aided #PostEditing of #MachineTranslation) for translators who want to make their #MTPE work less boring. https://www.cape.mt/
CAPE·MT

Computer-Aided Post-Editing of Machine Translation

Edited-post notifications have informed me that an art piece sold: the poster edited the post to state exactly that.

Practical Advantages

#PostEditing #EditingPosts #EditToot #PostEditNotification

We'll be presenting DivEMT at the 11 AM in-person poster session today at #EMNLP22! 🌍 Come have a look at our new multilingual #NMT and #postediting resource! #nlproc