๐Ÿš€ ๐—Ÿ๐—ฒ๐˜ ๐—Ÿ๐—Ÿ๐— ๐˜€ ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐—ฑ๐—ฒ ๐˜๐—ต๐—ฒ๐—ถ๐—ฟ ๐—น๐—ผ๐—ป๐—ด-๐—ฐ๐—ผ๐—ป๐˜๐—ฒ๐˜…๐˜ ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป๐—ถ๐—ป๐—ด ๐—ฑ๐—ฎ๐˜๐—ฎ!
Our data curation method lets the model downweight tokens that are not useful for context extension, questioning the standard equal weighting of tokens.

#NAACL2025 #NLProc #AI #LLMs

(1/🧵)

📄: arxiv.org/abs/2503.09202

Our contrastive token weights (Tab. 1) emphasize long-range dependencies while keeping the model from forgetting short contexts (< 8k). This trade-off is mirrored in downstream performance on long-context (RULER, LongBench) vs. short-context (MMLU) benchmarks.

(2/🧵)
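The contrastive weighting above can be sketched roughly as follows. This is a toy illustration, not the paper's exact scheme: the softmax-style normalization of the weights and the use of plain weighted NLL for training are my assumptions.

```python
import numpy as np

def log_softmax(x):
    # numerically stable log-softmax over the vocabulary axis
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def contrastive_token_weights(long_logits, short_logits, labels):
    """Weight each token by how much long context improves its prediction.

    long_logits / short_logits: (seq, vocab) logits from the long- and
    short-context scoring passes; labels: (seq,) gold next-token ids.
    """
    long_lp = np.take_along_axis(log_softmax(long_logits), labels[:, None], axis=-1)[:, 0]
    short_lp = np.take_along_axis(log_softmax(short_logits), labels[:, None], axis=-1)[:, 0]
    scores = long_lp - short_lp                # > 0 where long context helps
    e = np.exp(scores - scores.max())
    return e / e.sum() * len(labels)           # normalize to mean weight 1

def weighted_nll(logits, labels, weights):
    # per-token NLL of the model being trained, scaled by the token weights
    nll = -np.take_along_axis(log_softmax(logits), labels[:, None], axis=-1)[:, 0]
    return float((weights * nll).mean())
```

Tokens whose probability rises under long context get up-weighted; tokens predictable from short context alone are down-weighted, which is where the short-context forgetting trade-off comes from.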

As an alternative to scoring tokens with the context-shortened model, we also tried a frozen model for scoring. It performs similarly but is more robust.

Additionally, a much smaller model can be used here without much performance degradation! 🚀

(3/🧵)
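One practical consequence of a frozen (and possibly much smaller) scorer — my reading, not stated in the thread — is that per-token weights can be computed once offline and cached, so the scorer never has to run inside the long-context training loop. A minimal sketch, where `score_fn` is a hypothetical stand-in for the frozen scorer:

```python
import json
import numpy as np

def precompute_token_weights(chunks, score_fn, cache_path):
    """Score each tokenized chunk once with a frozen scorer and cache
    the per-token weights to a JSONL file for later training runs.

    score_fn: maps a list of token ids to a per-token weight array
    (hypothetical interface for the frozen, possibly smaller model).
    """
    with open(cache_path, "w") as f:
        for chunk in chunks:
            weights = score_fn(chunk)  # no gradients: the scorer never trains
            f.write(json.dumps({"tokens": chunk,
                                "weights": [float(w) for w in weights]}) + "\n")

def load_token_weights(cache_path):
    # stream the cached weights back in for the training run
    with open(cache_path) as f:
        return [json.loads(line) for line in f]
```

Because the weights are data-dependent but model-state-independent once the scorer is frozen, this one-off pass amortizes the scoring cost across the whole training run.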

Check out the paper below for a detailed methodological discussion and extensive experiments, or get started with the code right away!

📄 Paper: https://arxiv.org/abs/2503.09202
💻 Code: https://github.com/UKPLab/naacl2025-token-weighting

(4/🧵)

Token Weighting for Long-Range Language Modeling

Many applications of large language models (LLMs) require long-context understanding, but models continue to struggle with such tasks. We hypothesize that conventional next-token prediction training could contribute to this, because each token is assigned equal weight. Yet, intuitively, the amount of context needed to predict the next token accurately varies greatly across different data. To reflect this, we propose various novel token-weighting schemes that assign different weights to each training token in the loss, thereby generalizing existing works. For this, we categorize token-weighting methods using a two-step framework which compares the confidences of a long-context and short-context model to score tokens. We evaluate all methods on multiple long-context understanding tasks and show that non-uniform loss weights are helpful to improve the long-context abilities of LLMs. Different short-context models can be used effectively for token scoring, including models that are much smaller than the long-context model that is trained. All in all, this work contributes to a better understanding of the trade-offs long-context language modeling faces and provides guidelines for model steering via loss-weighting based on empirical evidence. The code can be found on GitHub.


And consider following the authors Falko Helm, Nico Daheim & Iryna Gurevych if you are interested in more information or an exchange of ideas. (5/5)

See you this week in Albuquerque 🌵! #NAACL2025