Tokenization for language modeling: BPE vs. Unigram Language Modeling (2020)
https://ndingwall.github.io/blog/tokenization
#HackerNews #Tokenization #LanguageModeling #BPE #Unigram #NLP

Tokenization for language modeling: Byte Pair Encoding vs Unigram Language Modeling
Tokenizers used by the best-performing language models (BERT, GPT-2, etc.) poorly reflect the morphology of English text. I had hoped to use some quarantine time to design one that aligns more closely with the relationships between wordforms, but Kaj Bostrom and Greg Durrett beat me to it, so this blog post materialized instead. In it, I add some additional motivation, evaluate both methods against ‘gold standard’ tokenizations, and speculate about what might come next.
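To see why BPE can end up misaligned with morphology, it helps to recall how it is trained: starting from characters, it repeatedly merges the most frequent adjacent symbol pair, with no knowledge of morpheme boundaries. Below is a minimal, self-contained sketch of that greedy merge loop (the toy corpus and function names are my own for illustration, not the tokenizer used by any particular model):

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs, weighted by word frequency.

    `words` maps a tuple of symbols (a tokenized word) to its corpus count.
    """
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    new_sym = "".join(pair)
    merged = Counter()
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(new_sym)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] += freq
    return merged

def train_bpe(corpus, num_merges):
    """Greedy BPE training: merge the most frequent pair, repeat."""
    # Start from individual characters, plus an end-of-word marker.
    words = Counter(tuple(w) + ("</w>",) for w in corpus)
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # purely frequency-driven
        words = merge_pair(words, best)
        merges.append(best)
    return words, merges

# A toy corpus: merges are chosen by raw pair frequency, so nothing
# forces the learned units to line up with morphemes like "-er"/"-est".
corpus = ["low"] * 5 + ["lower"] * 2 + ["newest"] * 6 + ["widest"] * 3
words, merges = train_bpe(corpus, 10)
```

The key point is the `max(pairs, key=pairs.get)` step: the merge order is driven entirely by co-occurrence counts in the training corpus, which is why the resulting subword inventory can cut across morpheme boundaries.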