Cluster-R1: Large Reasoning Models Are Instruction-following Clustering Agents
About
General-purpose embedding models excel at recognizing semantic similarities but fail to capture the text characteristics specified by user instructions. In contrast, instruction-tuned embedders can align embeddings with textual instructions yet cannot autonomously infer latent corpus structures, such as determining the optimal number of clusters. To address both limitations, we reframe instruction-following clustering as a generative task and train large reasoning models (LRMs) as autonomous clustering agents. Our reasoning-driven training pipeline enables LRMs to interpret high-level clustering instructions and then infer the corresponding latent groupings. To evaluate this paradigm, we introduce ReasonCluster, a comprehensive benchmark comprising 28 diverse tasks spanning daily dialogue, legal cases, and financial reports. Experiments across datasets and clustering scenarios show that our approach consistently outperforms strong embedding-based methods and LRM baselines, demonstrating that explicit reasoning fosters more faithful and interpretable instruction-based clustering.
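The generative setup described above can be sketched as follows: the instruction and the corpus go into a single prompt, and the model is asked to reason about the latent groups and emit cluster assignments. The prompt wording, output schema, and helper names (`build_prompt`, `parse_assignments`) are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch of instruction-following clustering as a generative task.
# The prompt template and JSON output schema below are assumptions for illustration.
import json

def build_prompt(instruction, texts):
    """Pack the clustering instruction and numbered corpus into one prompt."""
    numbered = "\n".join(f"[{i}] {t}" for i, t in enumerate(texts))
    return (
        f"Clustering instruction: {instruction}\n\n"
        f"Texts:\n{numbered}\n\n"
        "Reason about the latent groups, decide how many clusters fit the "
        'instruction, then output JSON: {"assignments": [cluster id per text]}.'
    )

def parse_assignments(response, n_texts):
    """Parse the model's JSON reply into one cluster label per input text."""
    labels = json.loads(response)["assignments"]
    assert len(labels) == n_texts, "model must label every text"
    return labels

texts = ["Stocks rallied today.", "The court ruled on the appeal.", "Markets fell sharply."]
prompt = build_prompt("Group by topic domain.", texts)
# A well-formed model reply for this prompt might look like:
reply = '{"assignments": [0, 1, 0]}'
print(parse_assignments(reply, len(texts)))  # -> [0, 1, 0]
```

Note that, unlike embedding-based pipelines, the number of clusters is left to the model rather than fixed in advance, which is the autonomy the abstract emphasizes.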
Related benchmarks
| Task | Dataset | Metric | Result (%) | Rank |
|---|---|---|---|---|
| Instruction-following clustering | LMSY C0 | V-measure | 66.9 | 13 |
| Instruction-following clustering | LMSY C1 | V-measure | 66.47 | 13 |
| Instruction-following clustering | LMSY C2 | V-measure | 62.65 | 13 |
| Instruction-following clustering | ECHR C0 | V-measure | 84.05 | 13 |
| Instruction-following clustering | ECHR C1 | V-measure | 84.8 | 13 |
| Instruction-following clustering | ECHR C2 | V-measure | 66.39 | 13 |
| Instruction-following clustering | SP500 C0 | V-measure | 69.35 | 13 |
| Instruction-following clustering | SP500 C1 | V-measure | 68.08 | 13 |
| Instruction-following clustering | SP500 C2 | V-measure | 60.28 | 13 |
| Instruction-following clustering | ReasonCluster (Overall) | V-measure | 68.42 | 13 |
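The scores above are V-measure values, the harmonic mean of homogeneity and completeness computed from the entropy of true versus predicted cluster labels. A minimal self-contained implementation (mirroring the standard definition, not code from this repository) looks like:

```python
# Minimal V-measure implementation from its standard definition:
# homogeneity h = 1 - H(C|K)/H(C), completeness c = 1 - H(K|C)/H(K),
# V = (1+beta) * h * c / (beta*h + c).  Example labels are illustrative.
from collections import Counter
from math import log

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def conditional_entropy(a, b):
    """H(a | b): entropy of a within each cluster of b, weighted by cluster size."""
    n = len(a)
    total = 0.0
    for cluster in set(b):
        members = [x for x, y in zip(a, b) if y == cluster]
        total += (len(members) / n) * entropy(members)
    return total

def v_measure(true, pred, beta=1.0):
    h_c, h_k = entropy(true), entropy(pred)
    homogeneity = 1.0 if h_c == 0 else 1 - conditional_entropy(true, pred) / h_c
    completeness = 1.0 if h_k == 0 else 1 - conditional_entropy(pred, true) / h_k
    if homogeneity + completeness == 0:
        return 0.0
    return (1 + beta) * homogeneity * completeness / (beta * homogeneity + completeness)

# A perfect grouping scores 1.0 even when cluster ids are permuted:
print(v_measure([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```

Because V-measure is invariant to label permutation, the generative model's arbitrary cluster ids can be scored directly against gold groupings.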