Gosia Lazuka, Andreea Simona Anghel, et al.
SC 2024
Recent advances in large pretrained language models have increased attention to zero-shot text classification. In particular, models fine-tuned on natural language inference datasets have been widely adopted as zero-shot classifiers due to their promising results and off-the-shelf availability. However, the fact that such models are unfamiliar with the target task can lead to instability and performance issues. We propose a plug-and-play method to bridge this gap using a simple self-training approach, requiring only the class names along with an unlabeled dataset, and without the need for domain expertise or trial and error. We show that fine-tuning the zero-shot classifier on its most confident predictions leads to significant performance gains across a wide range of text classification tasks, presumably since self-training adapts the zero-shot model to the task at hand.
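The core step the abstract describes — fine-tuning the zero-shot classifier on its most confident predictions — can be sketched as a pseudo-labeling loop. The snippet below is a minimal, hypothetical illustration: toy confidence scores stand in for a real NLI-based zero-shot model, and all names (`select_confident`, the example texts and labels) are illustrative, not from the paper.

```python
# Sketch of the confidence-based self-training step: keep only the
# unlabeled examples the zero-shot model is most sure about, and use
# them as pseudo-labeled training data for a fine-tuning round.

def select_confident(predictions, threshold=0.9):
    """Keep (text, label) pairs whose top class score clears the threshold."""
    pseudo = []
    for text, scores in predictions:
        label = max(scores, key=scores.get)
        if scores[label] >= threshold:
            pseudo.append((text, label))
    return pseudo

# Toy zero-shot scores over an unlabeled corpus (illustrative only);
# in the real setting these would come from an NLI model scoring each
# text against the class names.
unlabeled = [
    ("the match went to extra time", {"sports": 0.95, "finance": 0.05}),
    ("quarterly earnings beat estimates", {"sports": 0.08, "finance": 0.92}),
    ("uncertain mixed-topic snippet", {"sports": 0.55, "finance": 0.45}),
]

pseudo_labeled = select_confident(unlabeled, threshold=0.9)
# The ambiguous snippet is filtered out; the two confident examples
# become training data for the next fine-tuning round.
print(pseudo_labeled)
```

In the paper's setting, the selected pairs would be fed back to fine-tune the NLI-based classifier itself; here the selection logic is shown in isolation.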
George Kour, Samuel Ackerman, et al.
EMNLP 2022
Natalia Martinez Gil, Dhaval Patel, et al.
UAI 2024
Shubhi Asthana, Pawan Chowdhary, et al.
KDD 2021