
Beyond the Unit Hypersphere: Embedding Magnitude in Contrastive Learning

About

Cosine similarity is prevalent in contrastive learning, yet it makes an implicit assumption: embedding magnitude is noise. Prior work occasionally found dot product and cosine similarity comparable, but left unanswered WHAT information magnitude carries, WHEN it helps, and HOW to leverage it. We conduct a systematic study through a $2 \times 2$ ablation that independently controls input-side and output-side normalization across text and vision models. Our findings reveal three key insights. First, in text retrieval, output (document) magnitude strongly correlates with relevance (Cohen's $d$ up to 1.80), yielding the largest gains on reasoning-intensive tasks. Second, input and output magnitudes serve asymmetric roles: output magnitude directly scales similarity scores while input magnitude modulates training dynamics. Third, magnitude learning benefits asymmetric tasks (text retrieval, RAG) but harms symmetric tasks (STS, text-image alignment). These findings establish a task symmetry principle: the choice between cosine and dot product depends on whether the task has distinct input roles, enabling cost-free improvements by simply removing an unnecessary constraint.
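The distinction between cosine similarity and dot product reduces to whether embeddings are L2-normalized before scoring, and the paper's $2 \times 2$ ablation controls that choice independently on each side. A minimal sketch of those axes (function names and data are our own illustration, not the paper's code):

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Project embeddings onto the unit hypersphere, discarding magnitude."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def similarity(query: np.ndarray, doc: np.ndarray,
               norm_input: bool, norm_output: bool) -> np.ndarray:
    """One cell of the 2x2 ablation: independently choose whether to
    normalize the input (query) side and the output (document) side.

    norm_input=norm_output=True  -> cosine similarity
    norm_input=norm_output=False -> raw dot product (magnitude retained)
    """
    q = l2_normalize(query) if norm_input else query
    d = l2_normalize(doc) if norm_output else doc
    return q @ d.T

rng = np.random.default_rng(0)
q = rng.normal(size=(2, 8))  # toy query embeddings
d = rng.normal(size=(3, 8))  # toy document embeddings

cos = similarity(q, d, norm_input=True, norm_output=True)
dot = similarity(q, d, norm_input=False, norm_output=False)

# With output normalization off, a document's magnitude multiplicatively
# scales every score it receives -- the channel the paper links to relevance.
```

Doubling a document's magnitude doubles its dot-product scores but leaves its cosine scores unchanged, which is why only the unnormalized variants can learn to encode relevance in magnitude.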

Xincan Feng, Taro Watanabe • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Information Retrieval | BEIR | - | - | 59 |
| Information Retrieval | TREC DL 19 | nDCG@10 | 60.43 | 40 |
| Retrieval | BRIGHT 12 datasets aggregate (test) | nDCG@10 | 12.74 | 20 |
| Information Retrieval | TREC DL 20 | nDCG@10 | 59.69 | 19 |
| Question Answering | HotpotQA (test) | EM | 32.7 | 18 |
| Information Retrieval | MS MARCO (dev) | nDCG@10 | 32.92 | 12 |
| Information Retrieval | Multi-hop | nDCG@10 | 58.16 | 12 |
| Open-domain Question Answering | NQ 3.5K (test) | EM | 0.261 | 5 |
| Open-domain Question Answering | TriviaQA 11.3K (test) | EM | 40.2 | 5 |
