
Amalgamating Knowledge towards Comprehensive Classification

About

With the rapid development of deep learning, an unprecedentedly large number of trained deep network models have become available online. Reusing such trained models can significantly reduce the cost of training new models from scratch; indeed, retraining may be altogether infeasible, since the annotations used to train the original networks are often unavailable to the public. In this paper we propose to study a new model-reusing task, which we term "knowledge amalgamation". Given multiple trained teacher networks, each specializing in a different classification problem, the goal of knowledge amalgamation is to learn a lightweight student model capable of handling the comprehensive classification task. We assume no annotations other than the outputs of the teacher models are available, and thus focus on extracting and amalgamating knowledge from the multiple teachers. To this end, we propose a pilot two-step strategy to tackle the knowledge amalgamation task: first learning compact feature representations from the teachers, and then learning the network parameters in a layer-wise manner so as to build the student model. We apply this approach to four public datasets and obtain very encouraging results: even without any human annotation, the obtained student model is competent to handle the comprehensive classification task and in most cases outperforms the teachers on their individual sub-tasks.
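The core idea of amalgamation — training one student on the union of several teachers' label spaces using only the teachers' outputs on unlabelled data — can be illustrated with a deliberately minimal toy sketch. Everything below is an assumption for illustration: the "teachers" are random linear classifiers over disjoint label sets, and a single least-squares fit stands in for the paper's two-step, layer-wise student training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical "teacher" linear classifiers over the same 8-dim inputs,
# each specializing in a disjoint label set (3 classes and 2 classes).
d, c1, c2 = 8, 3, 2
W1 = rng.normal(size=(d, c1))
W2 = rng.normal(size=(d, c2))

# Unlabelled transfer set: inputs only, no human annotations available.
X = rng.normal(size=(200, d))

# Amalgamation target: concatenate the teachers' outputs so the student
# covers the union of both label sets (the "comprehensive" task).
T = np.hstack([X @ W1, X @ W2])  # shape (200, c1 + c2)

# Student learning: fit one lightweight student to reproduce the
# amalgamated targets (least squares as a stand-in for layer-wise training).
Ws, *_ = np.linalg.lstsq(X, T, rcond=None)

# The student now predicts over all c1 + c2 classes at once.
student_out = X @ Ws
print(student_out.shape)  # (200, 5)
```

In this linear toy setting the student recovers the teachers' combined mapping exactly; the paper's contribution is doing the analogous thing for deep nonlinear networks, where compact shared features must be learned before the layer-wise parameter fitting.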

Chengchao Shen, Xinchao Wang, Jie Song, Li Sun, Mingli Song • 2018

Related benchmarks

Task                   | Dataset                   | Accuracy (%) | Rank
Knowledge Distillation | CIFAR-10 + CIFAR-100      | 84.35        | 12
Knowledge Distillation | CIFAR-10 + Tiny-ImageNet  | 85.62        | 12
Knowledge Distillation | CIFAR-100 + Tiny-ImageNet | 81.31        | 12
Knowledge Distillation | MNIST + FASHION-MNIST     | 93.81        | 12
