You can find our paper here:
📃 https://arxiv.org/abs/2311.00408
and our code here:
💻 https://github.com/UKPLab/AdaSent

Check out the work of the authors: Yongxin Huang, Kexin Wang, Sourav Dutta, Raj Nath Patel, Goran Glavaš and Iryna Gurevych! (6/🧵) #EMNLP2023 #AdaSent #NLProc

AdaSent: Efficient Domain-Adapted Sentence Embeddings for Few-Shot Classification

Recent work has found that few-shot sentence classification based on pre-trained Sentence Encoders (SEs) is efficient, robust, and effective. In this work, we investigate strategies for domain specialization in the context of few-shot sentence classification with SEs. We first establish that unsupervised Domain-Adaptive Pre-Training (DAPT) of a base Pre-trained Language Model (PLM) (i.e., not an SE) substantially improves the accuracy of few-shot sentence classification, by up to 8.4 points. However, applying DAPT directly to SEs, on the one hand, disrupts the effects of their (general-domain) Sentence Embedding Pre-Training (SEPT). On the other hand, applying general-domain SEPT on top of a domain-adapted base PLM (i.e., after DAPT) is effective but inefficient, since the computationally expensive SEPT must be repeated on the DAPT-ed PLM of every domain. As a solution, we propose AdaSent, which decouples SEPT from DAPT by training a SEPT adapter on the base PLM. The adapter can then be inserted into DAPT-ed PLMs from any domain. We demonstrate AdaSent's effectiveness in extensive experiments on 17 few-shot sentence classification datasets: AdaSent matches or surpasses the performance of full SEPT on DAPT-ed PLMs while substantially reducing training costs. The code for AdaSent is available.
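In code, the core idea looks roughly like the sketch below. This is a minimal, hypothetical illustration assuming the AdapterHub adapters library on top of HuggingFace Transformers; the checkpoint name and adapter path are placeholders (the actual training code is in the GitHub repo linked above), and mean pooling is one common pooling choice, not necessarily the paper's exact setup.

import torch
import adapters
from transformers import AutoModel, AutoTokenizer

# 1) Start from a PLM that has been domain-adapted (DAPT) on unlabeled
#    in-domain text. "your-dapted-plm" is a placeholder checkpoint name.
tokenizer = AutoTokenizer.from_pretrained("your-dapted-plm")
model = AutoModel.from_pretrained("your-dapted-plm")
adapters.init(model)  # enable adapter support on a plain Transformers model

# 2) Plug in the SEPT adapter that was trained once on the *base* PLM with
#    general-domain sentence-embedding pre-training. Because SEPT is
#    decoupled from DAPT, this same adapter can be reused for any domain.
sept = model.load_adapter("path/to/sept-adapter")  # placeholder path
model.set_active_adapters(sept)

# 3) Mean-pool token states into domain-adapted sentence embeddings.
def encode(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)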

Need a lightweight solution for few-shot domain-specific sentence classification?

We propose #AdaSent!
🚀 Up to 7.2-point accuracy gain in 8-shot classification with only 10K unlabeled in-domain examples (toy sketch below)
🪶 Small backbone with 82M parameters
🧩 Reusable general sentence adapter across domains
(1/🧵) #EMNLP2023
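As a toy illustration of the few-shot setup, here is how embeddings from the sketch above could feed a simple classifier. A frozen encoder plus a logistic-regression head is a stand-in for exposition, not the paper's exact few-shot training procedure; the texts and labels are made up.

from sklearn.linear_model import LogisticRegression

# 8 shots per class for a made-up binary task; encode() is defined in the
# sketch above and returns domain-adapted sentence embeddings.
train_texts = ["an in-domain example of class A ..."] * 8 \
            + ["an in-domain example of class B ..."] * 8
train_labels = [0] * 8 + [1] * 8

X = encode(train_texts).numpy()
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

print(clf.predict(encode(["a new in-domain sentence"]).numpy()))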