
Multi-Task Variational Information Bottleneck

About

Multi-task learning (MTL) is an important subject in machine learning and artificial intelligence, with ubiquitous applications in computer vision, signal processing, and speech recognition. Although the subject has attracted considerable attention recently, existing models do not balance performance and robustness well across different tasks. This article proposes an MTL model based on the variational information bottleneck (VIB) architecture, which provides a more effective latent representation of the input features for downstream tasks. Extensive experiments on three public data sets under adversarial attacks show that the proposed model is competitive with state-of-the-art algorithms in prediction accuracy. The results suggest that combining the VIB with task-dependent uncertainties is a very effective way to extract valid information from the input features for accomplishing multiple tasks.
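The abstract names two ingredients: a VIB latent representation (a stochastic encoder regularized by a KL term) and task-dependent uncertainty weighting of the per-task losses. The following is a minimal NumPy sketch of how those two pieces combine into one training objective. All names, shapes, and the weighting form (homoscedastic-uncertainty scaling, exp(-s_i)·L_i + s_i) are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_encode(x, W_mu, W_logvar):
    # Stochastic encoder: maps input features to the parameters of a
    # Gaussian latent distribution q(z|x). (Linear maps for illustration.)
    mu = x @ W_mu
    logvar = x @ W_logvar
    return mu, logvar

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)): the VIB compression term that limits how much
    # information the latent code keeps about the input.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def multitask_vib_loss(task_losses, log_sigmas, kl, beta=1e-3):
    # Task-dependent uncertainty weighting (assumed form): each task loss
    # L_i is scaled by exp(-s_i), with s_i as a learned log-variance, plus
    # a regularizer s_i that keeps the weights from collapsing to zero.
    weighted = sum(np.exp(-s) * L + s for L, s in zip(task_losses, log_sigmas))
    return weighted + beta * kl

# Toy forward pass (random features and weights, purely illustrative).
x = rng.normal(size=(4, 8))
W_mu = rng.normal(size=(8, 3)) * 0.1
W_logvar = rng.normal(size=(8, 3)) * 0.1
mu, logvar = vib_encode(x, W_mu, W_logvar)
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps   # reparameterization trick
kl = kl_to_standard_normal(mu, logvar)
total = multitask_vib_loss([0.9, 1.4], [0.0, 0.0], kl)
```

With s_i = 0 the tasks are weighted equally; during training the s_i would be optimized jointly with the network so harder (noisier) tasks receive smaller weights.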

Weizhu Qian, Bowei Chen, Yichao Zhang, Guanghui Wen, Franck Gechter • 2020

Related benchmarks

Task                 | Dataset               | Result                | Rank
Image Classification | Office-Home (test)    | Mean Accuracy: 66.2   | 199
Image Classification | Office-Caltech (test) | Average Accuracy: 95  | 35
Image Classification | ImageCLEF (test)      | Accuracy: 78.9        | 33
