
Large Language Models Enable Few-Shot Clustering

About

Unlike traditional unsupervised clustering, semi-supervised clustering allows users to provide meaningful structure to the data, which helps the clustering algorithm match the user's intent. Existing approaches to semi-supervised clustering require a significant amount of feedback from an expert to improve the clusters. In this paper, we ask whether a large language model can amplify an expert's guidance to enable query-efficient, few-shot semi-supervised text clustering. We show that LLMs are surprisingly effective at improving clustering. We explore three stages where LLMs can be incorporated into clustering: before clustering (improving input features), during clustering (by providing constraints to the clusterer), and after clustering (using LLMs for post-correction). We find that incorporating LLMs in the first two stages routinely provides significant improvements in cluster quality, and that LLMs enable a user to make trade-offs between cost and accuracy to produce desired clusters. We release our code and LLM prompts for the public to use.
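To make the "before clustering" stage concrete, here is a minimal sketch of the idea: an LLM expands each document with keyphrases describing its intent, and those keyphrases are folded into the features before clustering. The `llm_keyphrases` function below is a canned stand-in for a real LLM call (it is not the paper's prompt), and the nearest-seed assignment is a simplified stand-in for a full k-means run.

```python
import re
from collections import Counter

def llm_keyphrases(text):
    # Hypothetical stand-in for an LLM call that returns intent
    # keyphrases for a text; the real system would prompt an LLM.
    canned = {"card": ["card issue"], "transfer": ["money transfer"]}
    return [p for k, ps in canned.items() if k in text.lower() for p in ps]

def featurize(text, expand=True):
    # Bag-of-words features, optionally expanded (and upweighted)
    # with the LLM-generated keyphrases.
    tokens = re.findall(r"[a-z]+", text.lower())
    if expand:
        for phrase in llm_keyphrases(text):
            tokens += phrase.split() * 3  # upweight keyphrase tokens
    return Counter(tokens)

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def cluster(texts, seeds):
    # Assign each text to its nearest seed text by cosine similarity,
    # a simplified stand-in for k-means over the expanded features.
    seed_vecs = [featurize(s) for s in seeds]
    return [max(range(len(seeds)),
                key=lambda i: cosine(featurize(t), seed_vecs[i]))
            for t in texts]
```

In this toy setup, texts about card problems and money transfers land in separate clusters because the injected keyphrases dominate the feature overlap, illustrating how LLM-expanded features can sharpen otherwise sparse text representations.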

Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, Graham Neubig• 2023

Related benchmarks

Task                       Dataset                                                    Result          Rank
Intent Classification      Banking77 (test)                                           Accuracy 72     151
Short Text Clustering      Tweet                                                      Accuracy 61.8   28
Short Text Clustering      Clinc150 (test)                                            NMI 92.6        23
Short Text Clustering      Bank 77 (test)                                             NMI 82.4        22
Clustering                 Bank77                                                     NMI 83.4        19
Clustering                 CLINC                                                      Accuracy 84.1   15
Dialogue Intent Clustering Chinese dialogue intent dataset (test)                     NMI Gain 5.97   12
Retrieval Judgment         RAL2M covidqa, expertqa, hagrid, hotpotqa, msmarco (test)  Accuracy 58.3   10
Clustering                 Adobe Lightroom (test)                                     Accuracy 71.5   9
Clustering                 OpenAI Codex (test)                                        Accuracy 40.3   9
