Now out in Nature Communications:
Deep neural networks and humans both benefit from compositional language structure.
w/ Yoav Ram and Limor Raviv
Assistant Professor of Data Science and Advanced Machine Learning at the University of Southern Denmark in Odense
Machine Learning, Natural Language Processing, Interpretability
Previously:
Postdoc @mpi_nl
PhD @ Kiel University, Germany
| Link | URL |
| --- | --- |
| Website | http://lpag.de |
| ORCID | https://orcid.org/0000-0001-6124-1092 |
| Google Scholar | https://scholar.google.de/citations?hl=en&user=AHGGdYQAAAAJ&view_op=list_works&sortby=pubdate |
Continual learning remains challenging across various natural language understanding tasks. When models are updated with new training data, they risk catastrophic forgetting of prior knowledge. In this work, we introduce a discrete key-value bottleneck for encoder-only language models, which enables efficient continual learning by requiring only localized updates. Inspired by the success of the discrete key-value bottleneck in vision, we address the new challenges that arise in NLP. We experiment with different bottleneck architectures to identify the variants best suited to language, and present a generic, task-independent discrete key initialization technique for NLP. We evaluate the discrete key-value bottleneck in four continual learning NLP scenarios and show that it alleviates catastrophic forgetting while performing competitively with other popular continual learning methods at lower computational cost.
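For context, here is a minimal sketch of what a discrete key-value bottleneck can look like in PyTorch. All names and dimensions (`DiscreteKeyValueBottleneck`, `num_pairs`, `key_dim`, `value_dim`) are illustrative assumptions, not the paper's implementation: encoder features are snapped to their nearest frozen key, and gradients reach only the retrieved value rows, which is what keeps updates localized.

```python
import torch
import torch.nn as nn

class DiscreteKeyValueBottleneck(nn.Module):
    """Illustrative sketch: frozen discrete keys quantize encoder features;
    only the retrieved value vectors are trainable, so learning a new task
    touches a small, localized set of parameters."""

    def __init__(self, num_pairs: int = 512, key_dim: int = 64, value_dim: int = 64):
        super().__init__()
        # Keys are initialized once (e.g., from encoder feature statistics) and frozen.
        self.keys = nn.Parameter(torch.randn(num_pairs, key_dim), requires_grad=False)
        # Values are the only trainable parameters in the bottleneck.
        self.values = nn.Parameter(torch.randn(num_pairs, value_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, key_dim) features from a frozen encoder.
        dists = torch.cdist(z, self.keys)   # (batch, num_pairs) pairwise distances
        idx = dists.argmin(dim=-1)          # hard quantization to nearest key
        # Gradients flow only into the rows of `values` that were retrieved.
        return self.values[idx]             # (batch, value_dim)

# Usage: sits between a frozen encoder and a task head.
bottleneck = DiscreteKeyValueBottleneck()
features = torch.randn(8, 64)               # stand-in for encoder outputs
out = bottleneck(features)                  # (8, 64)
```

Because the keys never move, a new task can only write into the values it actually retrieves, which limits interference with what earlier tasks stored; the actual models also handle NLP-specific issues such as key initialization, which this sketch omits.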
We have openings for PhD/Postdoc positions on multilingual language modeling at SDU's Centre for Machine Learning, Denmark. Topics range from the core of pre-training and instruction tuning to adjacent areas such as efficient language modeling. Please consider applying and/or resharing :)