
Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification

About

Fine-grained classification involves dealing with datasets that have a large number of classes with subtle differences between them. Guiding the model to focus on the dimensions that differentiate these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks: emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to weigh positives and negatives differently, in particular weighting closely confusable negatives more heavily than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods in the presence of a larger number of classes and/or more confusable classes, and helps models produce output distributions that are more differentiated.
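The page does not reproduce the loss definition, but as an illustration of the idea described above, here is a minimal PyTorch sketch of a weighted supervised contrastive loss. The function name, the temperature default, and the class_weights matrix (assumed here to encode inter-class confusability, e.g. derived from a confusion matrix or a jointly trained weighting model) are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def label_aware_contrastive_loss(embeddings, labels, class_weights, temperature=0.1):
    """Illustrative sketch of a label-aware (weighted) supervised contrastive loss.

    embeddings:    (batch, dim) L2-normalised representations
    labels:        (batch,) integer class labels
    class_weights: (num_classes, num_classes) matrix; entry [i, j] up-weighs
                   candidates of class j for an anchor of class i. How these
                   weights are obtained is an assumption here, not taken
                   from the paper.
    """
    sim = embeddings @ embeddings.T / temperature            # pairwise similarities
    # mask out self-similarity on the diagonal
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float('-inf'))

    # positives: same-label pairs, excluding the anchor itself
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # per-pair weight: anchor class i vs. candidate class j
    w = class_weights[labels.unsqueeze(1), labels.unsqueeze(0)]  # (batch, batch)

    # weighted log-softmax denominator: confusable negatives contribute more
    log_denom = torch.logsumexp(sim + torch.log(w + 1e-12), dim=1, keepdim=True)
    log_prob = sim - log_denom

    # average log-probability over positives, for anchors that have any
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    mean_log_prob = (log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[has_pos]
                     / pos_counts[has_pos])
    return -mean_log_prob.mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    emb = F.normalize(torch.randn(8, 16), dim=1)
    labels = torch.randint(0, 4, (8,))
    weights = torch.ones(4, 4)  # uniform weights reduce this to a plain SupCon-style loss
    print(label_aware_contrastive_loss(emb, labels, weights).item())
```

Placing the weights inside the log-sum-exp denominator makes confusable negatives contribute more to each anchor's normaliser, so the model is pushed hardest to separate the classes it most often confuses.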

Varsha Suresh, Desmond C. Ong • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Crisis Classification | CrisisOltea v1 (test) | Macro F1 | 95.87 | 14
Out-of-domain performance average | Average Out-of-Domain | Macro F1 | 74.37 | 14
Twitter dataset performance average | Average In-Domain | Macro F1 | 76.3 | 14
Hate Speech Detection | HateWas v1 (test) | Macro F1 | 56.96 | 14
Emotion Detection | EmoMoham v1 (test) | Macro F1 | 77.66 | 14
Single-label Classification | D_GE (test) | Accuracy | 65.2 | 6
