DirMixE: Harnessing Test Agnostic Long-tail Recognition with Hierarchical Label Variations
About
This paper explores test-agnostic long-tail recognition, a challenging long-tail task in which the test label distributions are unknown and arbitrarily imbalanced. We argue that the variation in these distributions can be decomposed hierarchically into global and local levels. The global ones reflect a broad range of diversity, while the local ones typically arise from milder changes, often focused on a particular neighborhood. Traditional methods predominantly adopt a Mixture-of-Experts (MoE) approach, targeting a few fixed test label distributions that exhibit substantial global variations, leaving the local variations unconsidered. To address this issue, we propose a new MoE strategy, DirMixE, which assigns experts to different Dirichlet meta-distributions of the label distribution, each targeting a specific aspect of local variations. Additionally, the diversity among these Dirichlet meta-distributions inherently captures global variations. This dual-level approach also leads to a more stable objective function, allowing us to sample different test distributions to better quantify the mean and variance of performance outcomes. Theoretically, we show that the proposed objective benefits from enhanced generalization by virtue of the variance-based regularization. Finally, extensive experiments on CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist validate the effectiveness of DirMixE.
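The core sampling idea above can be sketched in a few lines: draw candidate test label distributions from a Dirichlet meta-distribution, evaluate a loss under each, and optimize the mean of those losses plus a variance penalty. This is only a minimal illustration of the mean-variance objective, not the paper's implementation; the function names, the toy per-class error rates, and the penalty weight `lam` are all assumptions made up for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_test_distributions(alpha, n_samples, rng):
    """Draw label distributions (rows summing to 1) from a Dirichlet
    meta-distribution with concentration vector `alpha`."""
    return rng.dirichlet(alpha, size=n_samples)

def mean_variance_objective(per_dist_losses, lam=0.1):
    """Stabilized objective: mean loss across sampled test distributions
    plus a variance-based regularization term (lam is a made-up weight)."""
    return per_dist_losses.mean() + lam * per_dist_losses.var()

# Toy setup: 3 classes; alpha skewed toward class 0 mimics a "forward"
# long-tail meta-distribution (head classes dominate at test time).
alpha = np.array([5.0, 2.0, 1.0])
dists = sample_test_distributions(alpha, n_samples=8, rng=rng)

# Hypothetical per-class error rates of a single expert (illustrative only).
per_class_err = np.array([0.1, 0.3, 0.6])

# Expected error under each sampled test distribution, then the objective.
losses = dists @ per_class_err
objective = mean_variance_objective(losses)
```

In the actual method each expert is paired with its own Dirichlet meta-distribution (e.g. forward, uniform, backward), so local variation is captured by sampling within one meta-distribution and global variation by the diversity across them.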
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-LT | Top-1 Acc (Forward-LT, IR=50): 70.09 | 23 |
| Test-Agnostic Long-tail Recognition | CIFAR-100-LT, SADE setting (test) | Accuracy (Forward-LT, IR=100): 68.32 | 12 |
| Long-tail Image Classification | iNaturalist 2018, SADE's setting (test) | Forward-LT 3: 72.53 | 9 |
| Long-tail recognition | CIFAR-10-LT, Uniform | Accuracy (Run 1): 0.8324 | 9 |
| Long-tail recognition | CIFAR-10-LT, Backward-LT | Accuracy (Metric 1): 89.39 | 9 |
| Long-tail recognition | CIFAR-10-LT, Overall | Mean Accuracy: 87.57 | 9 |
| Long-tail recognition | ImageNet-LT, Forward-LT, ours setting (test) | Accuracy (Run 1): 70.13 | 9 |
| Long-tail recognition | ImageNet-LT, Backward, ours setting (test) | Accuracy 1: 0.5559 | 9 |
| Long-tail recognition | ImageNet-LT, Overall, ours setting (test) | Mean Acc: 61.5 | 9 |
| Long-tail recognition | iNaturalist 2018, ours setting (test) | Forward-LT Accuracy 1: 69.75 | 9 |