
Prompt Candidates, then Distill: A Teacher-Student Framework for LLM-driven Data Annotation

About

Recently, Large Language Models (LLMs) have demonstrated significant potential for data annotation, markedly reducing the labor costs associated with downstream applications. However, existing methods mostly adopt an aggressive strategy, prompting the LLM to determine a single gold label for each unlabeled sample. Due to the inherent uncertainty within LLMs, they often produce incorrect labels for difficult samples, severely compromising data quality for downstream applications. Motivated by ambiguity aversion in human behavior, we propose a novel candidate annotation paradigm in which LLMs are encouraged to output all possible labels when facing uncertainty. To ensure unique labels are provided for downstream tasks, we develop CanDist, a teacher-student framework that distills candidate annotations with a Small Language Model (SLM). We further provide a rigorous justification demonstrating that distilling candidate annotations from the teacher LLM offers superior theoretical guarantees compared to directly using single annotations. Extensive experiments across six text classification tasks validate the effectiveness of our proposed method. The source code is available at https://github.com/MingxuanXia/CanDist.
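The two stages described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; `candidate_labels` and `partial_label_loss` are hypothetical names, and the thresholding rule and partial-label objective shown here are common choices assumed for illustration:

```python
import math

def candidate_labels(label_probs, threshold=0.2):
    """Stage 1 (teacher LLM): instead of committing to one gold label,
    return every label whose estimated probability exceeds a threshold --
    the 'output all possible labels when uncertain' step."""
    cands = [lbl for lbl, p in label_probs.items() if p >= threshold]
    if not cands:
        # Degenerate case: fall back to the single most likely label.
        cands = [max(label_probs, key=label_probs.get)]
    return sorted(cands)

def partial_label_loss(student_probs, candidates):
    """Stage 2 (student SLM): one common partial-label objective is the
    negative log of the total student probability mass placed on the
    candidate set, which the SLM minimizes during distillation."""
    mass = sum(student_probs[c] for c in candidates)
    return -math.log(max(mass, 1e-12))

# A teacher uncertain between two AG News topics emits both candidates,
# and the student is rewarded for concentrating mass on that set.
cands = candidate_labels(
    {"World": 0.45, "Sports": 0.40, "Business": 0.10, "Sci/Tech": 0.05}
)
loss = partial_label_loss(
    {"World": 0.30, "Sports": 0.50, "Business": 0.15, "Sci/Tech": 0.05}, cands
)
```

Here the candidate set is `["Sports", "World"]`, and the student's loss depends only on its total mass on those two labels, so difficult samples no longer force a possibly wrong single annotation into training.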

Mingxuan Xia, Haobo Wang, Yixuan Li, Zewei Yu, Jindong Wang, Junbo Zhao, Runze Wu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Topic Classification | AG News (test) | Accuracy | 89.46 | 98 |
| Ontology Classification | DBPedia (test) | Accuracy | 98.72 | 53 |
| Content Type Classification | RCT (test) | Accuracy | 70.57 | 11 |
| Intent Classification | Banking (BANK) (test) | Accuracy | 76.27 | 11 |
| Medical Diagnosis Classification | Medical Abstract (MA) (test) | Accuracy | 64.23 | 11 |
| Topic Classification | TREC (test) | Accuracy | 87.80 | 11 |
| Content Type Classification | RCT (train) | Accuracy | 68.90 | 10 |
| Intent Classification | Banking (BANK) (train) | Accuracy | 72.94 | 10 |
| Medical Diagnosis Classification | Medical Abstract (MA) (train) | Accuracy | 63.76 | 10 |
| News Topic Classification | AGNews (AGN) (train) | Accuracy | 89.91 | 10 |

10 of 12 rows shown.
