
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

About

Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continually learning (CL) tasks as they arrive. Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks. We present CLiMB, a benchmark to study the challenge of learning multimodal tasks in a CL setting, and to systematically evaluate how upstream continual learning can rapidly generalize to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.
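The abstract's central concern is forgetting during upstream continual learning: a model visits tasks one at a time, and performance on earlier tasks degrades as new ones are learned. A minimal sketch of how such forgetting is commonly quantified, assuming a toy accuracy matrix rather than the actual CLiMB implementation (function names and numbers are illustrative, not the CLiMB API):

```python
# Hypothetical sketch of the upstream continual-learning protocol the
# abstract describes: after training on each task in sequence, the model is
# re-evaluated on all previously seen tasks, and forgetting is the average
# drop from each earlier task's best accuracy to its final accuracy.
# Names and numbers here are illustrative, not taken from CLiMB.

def forgetting(acc):
    """acc[i][j]: accuracy on task j after training through task i.

    Returns the average, over all but the last task, of
    (best accuracy seen for that task) - (accuracy after the final task).
    """
    n = len(acc)
    if n < 2:
        return 0.0
    drops = []
    for j in range(n - 1):
        best = max(acc[i][j] for i in range(j, n - 1))
        drops.append(best - acc[n - 1][j])
    return sum(drops) / len(drops)

# Toy accuracy matrix for a 3-task sequence
# (e.g. VQA v2 -> NLVR2 -> SNLI-VE); values are invented.
acc = [
    [70.0,  0.0,  0.0],
    [55.0, 72.0,  0.0],
    [50.0, 60.0, 74.0],
]
print(forgetting(acc))  # 16.0
```

A CL method that "mitigates forgetting", in the abstract's terms, is one that keeps this number small without sacrificing accuracy on the newly learned tasks.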

Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Exemplar-Free Class-Incremental Learning | CIFAR-100 | Avg Top-1 Inc Acc: 39.02 | 68 |
| Exemplar-Free Class-Incremental Learning | TinyImageNet | Top-1 Acc (Inc): 32.1 | 62 |
| Exemplar-Free Class-Incremental Learning | CIFAR-100 Big start | Average Incremental Accuracy (Aavg): 33.4 | 39 |
| Natural Language Visual Reasoning | NLVR2 | Accuracy: 72.85 | 21 |
| Exemplar-Free Class-Incremental Learning | CIFAR-100 Equally split | Aavg: 39.2 | 15 |
| Visual Question Answering | VQA v2 | Accuracy: 67.89 | 8 |
| Visual Entailment | SNLI-VE | Accuracy: 74.6 | 4 |
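The class-incremental rows report "Average Incremental Accuracy" (Aavg): after each incremental step, accuracy is measured over all classes seen so far, and those per-step accuracies are averaged. A rough illustration with invented values (not taken from the leaderboard):

```python
# Hypothetical illustration of Average Incremental Accuracy (Aavg).
# step_accs[i] is accuracy over all classes seen after incremental step i;
# Aavg is simply the mean of those per-step accuracies.
# The numbers below are made up for illustration only.

def avg_incremental_accuracy(step_accs):
    return sum(step_accs) / len(step_accs)

print(avg_incremental_accuracy([80.0, 60.0, 40.0]))  # 60.0
```

Because later steps cover more classes (and suffer more forgetting), per-step accuracy typically declines, so Aavg rewards methods that stay accurate throughout the sequence rather than only at the end.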

Other info

Code
