Efficient Continual Learning for Small Language Models with a Discrete Key-Value Bottleneck

About

Continual learning remains a challenge across various natural language processing (NLP) tasks, as models updated with new training data often risk catastrophic forgetting of previously acquired knowledge. We introduce a discrete key-value bottleneck (DKVB) for encoder-only language models, enabling efficient continual learning through localized updates. Inspired by the discrete key-value bottleneck proposed for vision, we address new, NLP-specific challenges. We compare different bottleneck architectures for NLP and introduce a new, task-independent initialization technique for the discrete keys. We evaluate our DKVB for NLP in four continual learning scenarios and show that it alleviates catastrophic forgetting. Our experiments demonstrate that the proposed approach achieves competitive performance compared to popular continual learning methods while incurring lower computational costs. Furthermore, we show that DKVB remains effective even in challenging single-head continual learning scenarios where no task ID is provided.

Andor Diera, Lukas Galke, Fabian Karl, Ansgar Scherp • 2024
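To make the idea behind the abstract more concrete, below is a minimal, illustrative PyTorch sketch of a discrete key-value bottleneck layer: a pooled encoder representation is split across several codebooks, snapped to the nearest frozen key in each codebook, and mapped to a learnable value vector, so that gradient updates touch only the values actually retrieved for a given input. All names and hyperparameters (DiscreteKeyValueBottleneck, num_codebooks, codebook_size, value_dim) are assumptions for illustration, not the authors' exact architecture or configuration.

```python
import torch
import torch.nn as nn


class DiscreteKeyValueBottleneck(nn.Module):
    """Illustrative sketch of a discrete key-value bottleneck (not the authors' exact code)."""

    def __init__(self, feature_dim, num_codebooks=8, codebook_size=512, value_dim=64):
        super().__init__()
        self.num_codebooks = num_codebooks
        key_dim = feature_dim // num_codebooks  # assumes feature_dim is divisible
        # Keys are initialized once and kept frozen; only the values are trained.
        self.keys = nn.Parameter(
            torch.randn(num_codebooks, codebook_size, key_dim), requires_grad=False
        )
        self.values = nn.Parameter(torch.zeros(num_codebooks, codebook_size, value_dim))

    def forward(self, features):
        # features: (batch, feature_dim), e.g. a pooled encoder representation.
        batch = features.size(0)
        # Split the representation into one chunk per codebook ("head").
        chunks = features.view(batch, self.num_codebooks, -1)        # (batch, C, key_dim)
        # Nearest-key lookup per codebook (Euclidean distance).
        dists = torch.cdist(chunks.transpose(0, 1), self.keys)       # (C, batch, codebook_size)
        idx = dists.argmin(dim=-1)                                   # (C, batch)
        # Retrieve the learnable value attached to each selected key.
        retrieved = torch.stack(
            [self.values[c, idx[c]] for c in range(self.num_codebooks)], dim=1
        )                                                            # (batch, C, value_dim)
        # Gradients flow only into the retrieved values, keeping updates localized.
        return retrieved.mean(dim=1)                                 # pooled output for a task head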

Related benchmarks

Task | Dataset | Metric | Result | Rank
Class-incremental learning | 20NG (test) | Accuracy | 97.06 | 13
Domain-incremental learning | DSC | Average Backward Transfer (BWT) | 6.24 | 13
Task-incremental learning | 4GLUE | Average Backward Transfer (BWT) | 21.3 | 13
Class-incremental learning | 20NG | Average Backward Transfer (BWT) | 29.05 | 13
Task-incremental learning | 4GLUE (test) | Accuracy | 69.65 | 13
Domain-incremental learning | DSC (test) | Accuracy | 83.93 | 13
