
Continual Learning of Context-dependent Processing in Neural Networks

About

Deep neural networks (DNNs) are powerful tools for learning sophisticated but fixed mappings between inputs and outputs, which limits their application in more complex and dynamic situations where the mapping rules are not fixed but change with context. To lift this limitation, we developed a novel approach combining a learning algorithm, called orthogonal weights modification (OWM), with a context-dependent processing (CDP) module. We demonstrated that with OWM to overcome catastrophic forgetting, and the CDP module to learn how to reuse a feature representation and a classifier across contexts, a single network can acquire numerous context-dependent mapping rules in an online and continual manner, with as few as ~10 samples to learn each. This should enable highly compact systems to gradually learn myriad regularities of the real world and eventually behave appropriately within it.
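The core idea of OWM is to constrain each weight update to lie in the subspace orthogonal to the inputs seen in previous tasks, so new learning minimally disturbs old mappings. A minimal NumPy sketch of that idea follows; the function names, learning rate, and the regularizer alpha are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def owm_projector_update(P, a, alpha=1e-3):
    """Recursively shrink the projector P so that it (approximately)
    annihilates the direction of a new input vector a, in addition to
    all directions absorbed previously. alpha is a small regularizer."""
    a = a.reshape(-1, 1)
    Pa = P @ a
    return P - (Pa @ Pa.T) / (alpha + float(a.T @ Pa))

def owm_weight_update(W, grad, P, lr=0.1):
    """Apply a backprop gradient only after projecting it through P,
    so the step is ~orthogonal to inputs from earlier tasks."""
    return W - lr * (grad @ P)  # project on the input side of the layer

# Toy check: after P absorbs input a, the projected component of a is tiny,
# so later gradient steps barely move the network's response to a.
rng = np.random.default_rng(0)
dim = 8
P = np.eye(dim)
a = rng.normal(size=dim)
P = owm_projector_update(P, a)
print(np.linalg.norm(P @ a))  # close to zero
```

With P starting at the identity, one update leaves P @ a scaled by alpha / (alpha + ||a||^2), which is why subsequent projected gradients hardly interfere with what was learned from a.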

Guanxiong Zeng, Yang Chen, Bo Cui, Shan Yu • 2018

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR-100 Split | Accuracy: 77.07 | 61
Class-incremental learning | CIFAR10 (test) | Average Accuracy: 75.39 | 59
Continual Learning | CIFAR-100 (10-split) | ACC: 68.89 | 42
Class-incremental learning | MNIST (test) | Average Accuracy: 96.3 | 35
Continual Learning | TinyImageNet 25-split | ACC: 49.98 | 29
Continual Learning | Split CIFAR-100 20 tasks | Mean Test Accuracy: 68.47 | 26
Image Classification | MNIST Split | Test Accuracy: 99.36 | 24
Class-incremental learning | Tiny-ImageNet standard (test) | Average Accuracy: 40.29 | 20
Continual Learning | MNIST permuted | AT: 90.71 | 19
Continual Image Classification | CIFAR100 Split | Accuracy: 50.94 | 17

(Showing 10 of 16 rows)

Other info

Code
