
ColBERT-Zero: To Pre-train Or Not To Pre-train ColBERT models

About

Current state-of-the-art multi-vector models are obtained through a small Knowledge Distillation (KD) training step on top of strong single-vector models, leveraging the large-scale pre-training of those models. In this paper, we study the pre-training of multi-vector models and show that large-scale multi-vector pre-training yields much stronger multi-vector models. Notably, a fully ColBERT-pre-trained model, ColBERT-Zero, trained only on public data, outperforms GTE-ModernColBERT as well as its base model, GTE-ModernBERT, which leverages closed and much stronger data, setting a new state of the art for models of this size. We also find that, although a small KD step alone is not enough to approach the results of full pre-training, adding a supervised step beforehand brings performance much closer while skipping the most costly unsupervised phase. Finally, we find that aligning the fine-tuning and pre-training setups is crucial when repurposing existing models. To enable exploration of our results, we release various checkpoints as well as the code used to train them.
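For readers unfamiliar with multi-vector retrieval, the sketch below illustrates the ColBERT-style late-interaction (MaxSim) scoring that defines the models studied here, together with a generic score-distillation loss of the kind typically used in a KD step. Tensor shapes, function names, and the exact loss formulation are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the ColBERT-Zero codebase): MaxSim late-interaction
# scoring and a generic score-distillation loss for a KD training step.
import torch
import torch.nn.functional as F

def maxsim_score(query_embs: torch.Tensor, doc_embs: torch.Tensor) -> torch.Tensor:
    """ColBERT-style relevance of one document to one query.

    query_embs: (num_query_tokens, dim) L2-normalized token embeddings
    doc_embs:   (num_doc_tokens, dim)   L2-normalized token embeddings
    """
    sim = query_embs @ doc_embs.T  # token-to-token cosine similarities
    # Each query token keeps its best-matching document token; the per-token
    # maxima are summed into the final relevance score.
    return sim.max(dim=-1).values.sum()

def kd_loss(student_scores: torch.Tensor, teacher_scores: torch.Tensor,
            temperature: float = 1.0) -> torch.Tensor:
    """Match the student's score distribution over candidate documents to a
    stronger teacher's (a common retrieval-KD formulation; assumed here,
    not necessarily the paper's exact loss)."""
    log_p = F.log_softmax(student_scores / temperature, dim=-1)
    q = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(log_p, q, reduction="sum")

# Toy usage with random unit vectors standing in for encoder outputs.
q_embs = F.normalize(torch.randn(8, 128), dim=-1)
docs = [F.normalize(torch.randn(50, 128), dim=-1) for _ in range(4)]
student = torch.stack([maxsim_score(q_embs, d) for d in docs])
teacher = torch.randn(4)  # stand-in for teacher (e.g. cross-encoder) scores
print(student, kd_loss(student, teacher))
```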

Antoine Chaffin, Luca Arnaboldi, Amélie Chatelain, Florent Krzakala • 2026

Related benchmarks

Task                   Dataset                      Result   Rank
Information Retrieval  ArguAna (BEIR v1.0.0, test)  53.07    55
