
Upsample or Upweight? Balanced Training on Heavily Imbalanced Datasets

About

Data abundance across different domains exhibits a long-tailed distribution: few domains have abundant data, while most face data scarcity. Our work focuses on a multilingual setting, where available data is heavily skewed towards high-resource languages. Two common strategies to address this disparity are upsampling low-resource data (Temperature Sampling) and upweighting low-resource loss (Scalarization). These methods are often assumed to be equivalent, but this equivalence has not been rigorously established, prompting our investigation. Through theoretical and empirical analysis, we identify when these two methods are equivalent and when they diverge. We prove that they are equivalent under full gradient descent but differ under stochastic gradient descent due to differences in gradient variance. Specifically, Temperature Sampling exhibits lower variance in gradient estimation compared to Scalarization, leading to faster convergence but a higher risk of overfitting. Based on these insights, we propose Cooldown, a strategy that starts by heavily upsampling low-resource languages to accelerate convergence and gradually reduces the upsampling to prevent overfitting -- achieving the best of both worlds. Our method competes effectively with existing data re-weighting techniques while offering computational efficiency.
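To make the contrast concrete, here is a minimal sketch of the two strategies and the annealing idea behind Cooldown. The function names, the linear schedule, and the default temperatures are illustrative assumptions, not the paper's exact implementation; the key relations (sampling probability proportional to n_i^(1/τ), and loss weights chosen so Scalarization matches Temperature Sampling in expectation) follow the standard definitions.

```python
def temperature_sampling_probs(sizes, tau):
    """Temperature Sampling: p_i ∝ n_i^(1/tau).
    tau = 1 recovers proportional sampling; larger tau flattens
    the distribution, upsampling low-resource languages."""
    scaled = [n ** (1.0 / tau) for n in sizes]
    total = sum(scaled)
    return [s / total for s in scaled]

def scalarization_weights(sizes, tau):
    """Scalarization: sample proportionally to data size, but
    multiply each language's loss by p_target_i / p_proportional_i
    so the expected (full-batch) gradient matches Temperature
    Sampling at the same tau."""
    total = sum(sizes)
    proportional = [n / total for n in sizes]
    target = temperature_sampling_probs(sizes, tau)
    return [t / p for t, p in zip(target, proportional)]

def cooldown_tau(step, total_steps, tau_start=5.0, tau_end=1.0):
    """Hypothetical linear Cooldown schedule: start with heavy
    upsampling (high tau) for fast convergence, then anneal toward
    proportional sampling (tau = 1) to reduce overfitting risk."""
    frac = min(step / total_steps, 1.0)
    return tau_start + frac * (tau_end - tau_start)
```

For example, with sizes [900, 100] and tau = 5, Temperature Sampling lifts the low-resource share from 10% to roughly 39%, while Scalarization keeps proportional sampling but gives the low-resource loss a weight well above 1, so both target the same effective data distribution in expectation.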

Tianjian Li, Haoran Xu, Weiting Tan, Kenton Murray, Daniel Khashabi • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multilingual Long Document Retrieval | MLDR 13 (test) | NDCG@10: 52.7 | 18 |
| Text Retrieval | BEIR-5 all-MiniLM-L6-v2 (test) | Average NDCG@10: 47.6 | 14 |
