The Language Technology Group at the University of Oslo has two papers accepted at #ICLR2026!

- Dual Language Models: Balancing Training Efficiency and Overfitting Resilience by David Samuel and Lucas Charpentier

- Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages by David Samuel, Lilja Øvrelid, Erik Velldal and Andrey Kutuzov

Details and links in the thread:

#NLProc #Norway #Norge #UiO

Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages

https://arxiv.org/abs/2512.08777

Our post-training method uses on-policy RL: the model trains exclusively on its own generated responses, guided by reward signals from a "judge" LLM that doesn't need to be fluent in the target language. In a case study on #Norwegian #Bokmål with native-speaker evaluation, the on-policy approach was strongly preferred over both translated supervised fine-tuning and a multilingual baseline.

Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages

We propose a post-training method for lower-resource languages that preserves the fluency of language models even when they are aligned by disfluent reward models. Preference optimization is now a well-researched topic, but previous work has mostly addressed models for English and Chinese. Lower-resource languages lack both datasets written by native speakers and language models capable of generating fluent synthetic data. In this work, we therefore focus on developing a fluent preference-aligned language model without any instruction-tuning data in the target language. Our approach uses an on-policy training method, which we compare with two common approaches: supervised fine-tuning on machine-translated data and multilingual fine-tuning. We conduct a case study on Norwegian Bokmål and evaluate fluency through native-speaker assessments. The results show that the on-policy aspect is crucial: our method outperforms the alternatives without relying on any hard-to-obtain data.
