
DirMixE: Harnessing Test-Agnostic Long-tail Recognition with Hierarchical Label Variations

About

This paper explores test-agnostic long-tail recognition, a challenging long-tail task where the test label distributions are unknown and arbitrarily imbalanced. We argue that the variation in these distributions can be broken down hierarchically into global and local levels. The global ones reflect a broad range of diversity, while the local ones typically arise from milder changes, often focused on a particular neighborhood. Traditional methods predominantly use a Mixture-of-Experts (MoE) approach, targeting a few fixed test label distributions that exhibit substantial global variations, while leaving the local variations unconsidered. To address this issue, we propose a new MoE strategy, DirMixE, which assigns experts to different Dirichlet meta-distributions of the label distribution, each targeting a specific aspect of local variations. Additionally, the diversity among these Dirichlet meta-distributions inherently captures global variations. This dual-level approach also leads to a more stable objective function, allowing us to better sample different test distributions to quantify the mean and variance of performance outcomes. Theoretically, we show that the variance-based regularization in this objective enhances generalization. Finally, extensive experiments on CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist validate the effectiveness of DirMixE.
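The core idea above can be illustrated with a small sketch: each expert is tied to a Dirichlet meta-distribution over label distributions, and the training objective penalizes both the mean and the variance of the risk across sampled test distributions. The function names, concentration vectors, and the stand-in per-class losses below are all hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_test_distributions(alpha, n_samples, rng):
    """Draw label distributions p from a Dirichlet meta-distribution
    with concentration vector `alpha` (one distribution per row)."""
    return rng.dirichlet(alpha, size=n_samples)

def mean_variance_objective(per_class_risk, alpha, n_samples=32, lam=0.1, rng=rng):
    """Hypothetical sketch of the dual-level objective: sample p ~ Dir(alpha),
    evaluate the distribution-weighted risk r(p) = <p, per_class_risk>,
    and add a variance-based regularizer over the sampled risks."""
    P = sample_test_distributions(alpha, n_samples, rng)  # shape (n_samples, C)
    risks = P @ per_class_risk                            # risk under each sampled p
    return risks.mean() + lam * risks.var()

# Three experts, each tied to a different Dirichlet meta-distribution
# (illustrative values): forward long-tail, near-uniform, backward long-tail.
C = 10
meta_alphas = [
    np.linspace(5.0, 0.5, C),  # forward-LT: mass concentrated on head classes
    np.full(C, 2.0),           # near-uniform test distribution
    np.linspace(0.5, 5.0, C),  # backward-LT: mass concentrated on tail classes
]
per_class_risk = rng.uniform(0.1, 0.9, size=C)  # stand-in per-class losses
objectives = [mean_variance_objective(per_class_risk, a) for a in meta_alphas]
print([round(o, 4) for o in objectives])
```

Because each expert's objective averages over distributions drawn from its own meta-distribution, diversity across the `meta_alphas` plays the role of the global variation, while sampling within one Dirichlet captures the local variation.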

Zhiyong Yang, Qianqian Xu, Sicong Li, Zitai Wang, Xiaochun Cao, Qingming Huang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-LT | Top-1 Acc (Forward-LT, IR=50): 70.09 | 23 |
| Test-Agnostic Long-tail Recognition | CIFAR-100-LT, SADE setting (test) | Accuracy, Forward-LT (100): 68.32 | 12 |
| Long-tail Image Classification | iNaturalist 2018, SADE's setting (test) | Forward-LT 3: 72.53 | 9 |
| Long-tail Recognition | CIFAR-10-LT, Uniform | Accuracy (Run 1): 0.8324 | 9 |
| Long-tail Recognition | CIFAR-10-LT, Backward-LT | Accuracy (Metric 1): 89.39 | 9 |
| Long-tail Recognition | CIFAR-10-LT, Overall | Mean Accuracy: 87.57 | 9 |
| Long-tail Recognition | ImageNet-LT, Forward-LT, Ours setting (test) | Accuracy (Run 1): 70.13 | 9 |
| Long-tail Recognition | ImageNet-LT, Backward, Ours setting (test) | Accuracy 1: 0.5559 | 9 |
| Long-tail Recognition | ImageNet-LT, Overall, Ours setting (test) | Mean Acc: 61.5 | 9 |
| Long-tail Recognition | iNaturalist 2018, Ours setting (test) | Forward-LT Accuracy 1: 69.75 | 9 |

Showing 10 of 17 rows.

Other info

Code
