
SPILL: Domain-Adaptive Intent Clustering based on Selection and Pooling with Large Language Models

About

In this paper, we propose Selection and Pooling with Large Language Models (SPILL), an intuitive and domain-adaptive method for intent clustering without fine-tuning. Existing embedding-based clustering methods rely on a few labeled examples or unsupervised fine-tuning to optimize results for each new dataset, which makes them less generalizable across datasets. Our goal is to make existing embedders more generalizable to new domain datasets without further fine-tuning. Inspired by our theoretical derivation and simulation results on the effectiveness of sampling and pooling techniques, we view the clustering task as a small-scale selection problem: a good solution to this problem is associated with better clustering performance. Accordingly, we propose a two-stage approach: First, for each utterance (referred to as the seed), we derive its embedding using an existing embedder and apply a distance metric to select a pool of candidates close to the seed. Because the embedder is not optimized for new datasets, in the second stage we use an LLM to further select the candidates that share the same intent as the seed. Finally, we pool these selected candidates with the seed to derive a refined embedding for the seed. We find that our method generally outperforms directly using an embedder, and it achieves results comparable to other state-of-the-art studies, even those that use much larger models and require fine-tuning, demonstrating its strength and efficiency. Our results indicate that our method enables existing embedders to be improved without additional fine-tuning, making them more adaptable to new domain datasets. Additionally, viewing the clustering task as a small-scale selection problem opens up the potential to use LLMs to customize clustering tasks according to the user's goals.
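The two-stage refinement described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedder is replaced by random vectors, the LLM judge by a keyword-matching stand-in, and the names (`refine_embedding`, `pool_size`, `llm_same_intent`) are placeholders not taken from the paper.

```python
import numpy as np

def refine_embedding(seed_idx, embeddings, utterances, llm_same_intent, pool_size=5):
    """SPILL-style sketch: select candidates near the seed by cosine
    similarity, keep those an LLM judges to share the seed's intent,
    then mean-pool them with the seed to get a refined embedding."""
    seed = embeddings[seed_idx]
    # Stage 1: cosine similarity of every utterance to the seed
    norms = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(seed)
    sims = embeddings @ seed / np.clip(norms, 1e-12, None)
    sims[seed_idx] = -np.inf  # exclude the seed itself
    candidates = np.argsort(-sims)[:pool_size]
    # Stage 2: LLM filter keeps only same-intent candidates
    kept = [int(i) for i in candidates
            if llm_same_intent(utterances[seed_idx], utterances[i])]
    # Pool the seed with the selected candidates (mean pooling)
    return embeddings[[seed_idx] + kept].mean(axis=0)

# Toy demo with a keyword-based stand-in for the LLM judge
utterances = ["book a flight", "reserve a plane ticket",
              "check my balance", "flight to Paris"]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 8))  # stand-in for a real embedder
judge = lambda a, b: "flight" in a and ("flight" in b or "plane" in b)
refined = refine_embedding(0, embeddings, utterances, judge, pool_size=2)
print(refined.shape)  # (8,)
```

The refined embeddings would then be fed to a standard clustering algorithm (e.g. k-means) in place of the raw embedder outputs.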

I-Fan Lin, Faegheh Hasibi, Suzan Verberne • 2025

Related benchmarks

Task                   | Dataset         | Metric | Result | Rank
Short Text Clustering  | Clinc150 (test) | NMI    | 93.77  | 23
Short Text Clustering  | Bank 77 (test)  | NMI    | 85.01  | 22
Short Text Clustering  | MASSIVE (test)  | NMI    | 77.62  | 20
Short Text Clustering  | mTOP (test)     | NMI    | 72.65  | 20

Other info

Code
