Learning More Generalized Experts by Merging Experts in Mixture-of-Experts
About
We observe that incorporating a shared layer in a mixture-of-experts model can degrade performance. This leads us to hypothesize that learning shared features is difficult in deep learning, potentially because the same underlying feature is redundantly learned as several different representations. To address this issue, we track each expert's usage frequency and merge the two most frequently selected experts. We then update the least frequently selected expert with the merged result. This approach, combined with the router's subsequent learning of expert selection, allows the model to determine whether the most frequently selected experts have learned the same feature in different ways. If they have, the merged expert can be trained further to learn a more general feature. Consequently, our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task-incremental learning.
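To make the mechanism concrete, below is a minimal PyTorch sketch of the frequency-tracked merge step. The class `MergeableMoE`, the plain linear experts, the top-k router, and the parameter-averaging merge rule are all illustrative assumptions, not the paper's exact architecture or merge coefficients.

```python
import torch
import torch.nn as nn


class MergeableMoE(nn.Module):
    """Illustrative MoE layer with expert-usage tracking and merging."""

    def __init__(self, dim: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.router = nn.Linear(dim, num_experts)
        self.top_k = top_k
        # Running count of how often the router selects each expert.
        self.register_buffer("usage", torch.zeros(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Route each token to its top-k experts.
        logits = self.router(x)
        weights, idx = logits.topk(self.top_k, dim=-1)
        gates = weights.softmax(dim=-1)
        # Track expert usage frequency.
        self.usage += torch.bincount(idx.flatten(), minlength=len(self.experts)).float()
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += gates[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

    @torch.no_grad()
    def merge_most_used_experts(self):
        """Merge the two most-used experts into the least-used slot."""
        order = self.usage.argsort(descending=True).tolist()
        most, second, least = order[0], order[1], order[-1]
        target = self.experts[least]
        # Assumed merge rule: simple parameter averaging of the top two experts.
        for p_t, p_a, p_b in zip(
            target.parameters(),
            self.experts[most].parameters(),
            self.experts[second].parameters(),
        ):
            p_t.copy_(0.5 * (p_a + p_b))
        self.usage.zero_()  # restart frequency tracking after the merge
```

In training, `merge_most_used_experts()` would presumably be invoked periodically (for example, every few steps or at task boundaries); the router then continues learning and can repurpose the merged expert if the two most-used experts had indeed learned the same feature differently.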
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Few-shot Image Classification | DTD | Accuracy | 64 | 42 |
| Few-shot Image Classification | SUN397 | Accuracy | 72.5 | 36 |
| Image Classification | Food few-shot | Accuracy | 88.8 | 32 |
| Image Classification | Stanford Cars few-shot | Score (%) | 69.5 | 32 |
| Image Classification | EuroSAT few-shot | Accuracy | 82.3 | 32 |
| Image Classification | CIFAR100 few-shot | Accuracy | 74.9 | 32 |
| Image Classification | Flowers few-shot | Score (%) | 89.4 | 32 |
| Image Classification | OxfordPet few-shot | Score (%) | 89.1 | 32 |
| Multi-Task Incremental Learning | MTIL (Aircraft, Caltech101, CIFAR100, DTD, EuroSAT, Flowers, Food, MNIST, OxfordPet, Cars, SUN397) | Caltech101 Accuracy | 94.7 | 32 |
| Image Classification | MNIST few-shot | Accuracy (few-shot) | 89 | 32 |