
Task-Free Continual Learning

About

Methods proposed in the literature for continual deep learning typically operate in a task-based sequential learning setup: a sequence of tasks is learned one at a time, with all data of the current task available but none from previous or future tasks, and with task boundaries and identities known at all times. This setup, however, is rarely encountered in practical applications. We therefore investigate how to transform continual learning into an online setup. We develop a system that keeps learning over time in a streaming fashion, with data distributions changing gradually and without the notion of separate tasks. To this end, we build on the work on Memory Aware Synapses and show how this method can be made online by providing a protocol that decides i) when to update the importance weights, ii) which data to use to update them, and iii) how to accumulate the importance weights at each update step. Experimental results show the validity of the approach in the context of two applications: (self-)supervised learning of a face recognition model by watching soap series, and teaching a robot to avoid collisions.
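The three decisions in the protocol above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a toy linear model, uses a caller-supplied buffer of recent "hard" samples as the data source, and accumulates importance via a running average. All names (`OnlineMAS`, `importance_linear`, `alpha`, `lam`) are illustrative.

```python
import numpy as np

def importance_linear(W, samples):
    """MAS-style importance for a linear model y = W @ x.

    Importance of each weight = average gradient magnitude of the
    squared L2 norm of the model output w.r.t. that weight:
        d ||W x||^2 / dW = 2 (W x) x^T
    """
    omega = np.zeros_like(W)
    for x in samples:
        y = W @ x
        omega += np.abs(2.0 * np.outer(y, x))
    return omega / len(samples)

class OnlineMAS:
    """Sketch of the online protocol:
    - WHEN:  the caller triggers update_importance() at a chosen
             moment (e.g. a detected plateau in the loss);
    - WHICH: importance is estimated on a small buffer of recent
             samples passed in by the caller;
    - HOW:   new estimates are folded into the accumulated
             importance via a running average.
    """
    def __init__(self, W, alpha=0.5, lam=1.0):
        self.omega = np.zeros_like(W)   # accumulated importance
        self.anchor = W.copy()          # weights at last update
        self.alpha = alpha              # running-average factor
        self.lam = lam                  # regularization strength

    def update_importance(self, W, buffer):
        new = importance_linear(W, buffer)
        self.omega = self.alpha * self.omega + (1 - self.alpha) * new
        self.anchor = W.copy()

    def penalty(self, W):
        # Quadratic penalty keeping important weights near the anchor.
        return self.lam * np.sum(self.omega * (W - self.anchor) ** 2)
```

In use, the penalty is added to the streaming task loss at every step, so weights deemed important for past data are discouraged from drifting, while unimportant weights remain free to adapt.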

Rahaf Aljundi, Klaas Kelchtermans, Tinne Tuytelaars • 2018

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Online Continual Learning | CIFAR-100 | AAUC: 52.93 | 20 |
| Online Continual Learning | CIFAR-10 | Average AUC: 75.89 | 20 |
| Online Continual Learning | TinyImageNet | AAUC: 37.81 | 18 |
| Online Continual Learning | ImageNet-200 | AAUC: 38.28 | 18 |
| Online Continual Learning | ImageNet-1K (Disjoint) | AAUC: 31.68 | 9 |
| Online Continual Learning | ImageNet-1K Gaussian-Scheduled | AAUC: 19.37 | 9 |
