Deep Multimodal Subspace Clustering Networks

About

We present convolutional neural network (CNN) based approaches for unsupervised multimodal subspace clustering. The proposed framework consists of three main stages: a multimodal encoder, a self-expressive layer, and a multimodal decoder. The encoder takes multimodal data as input and fuses it into a latent-space representation. The self-expressive layer enforces the self-expressiveness property and yields an affinity matrix over the data points. The decoder reconstructs the original input data, and the network is trained on the distance between the decoder's reconstruction and the original input. We investigate early, late, and intermediate fusion techniques and propose three corresponding encoders for spatial fusion; the self-expressive layers and multimodal decoders are essentially the same across the different spatial fusion-based approaches. In addition to these spatial fusion-based methods, we also propose an affinity fusion-based network in which the self-expressive layers corresponding to different modalities are enforced to be the same. Extensive experiments on three datasets show that the proposed methods significantly outperform state-of-the-art multimodal subspace clustering methods.
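To make the self-expressiveness property concrete, here is a minimal NumPy sketch (not the authors' implementation, which trains the coefficients as a network layer): each latent vector is expressed as a linear combination of the others, and the learned coefficient matrix induces the affinity matrix used for clustering. The ridge-regularized closed-form solve and the toy orthogonal subspaces below are illustrative assumptions.

```python
import numpy as np

def self_expressive_coeffs(Z, lam=0.1):
    """Solve min_C ||Z - Z C||^2 + lam ||C||^2, then zero the diagonal.

    Z: (d, n) matrix whose columns are latent representations.
    Returns an (n, n) coefficient matrix C; the zeroed diagonal
    reflects the usual constraint that a point must not explain itself.
    """
    n = Z.shape[1]
    G = Z.T @ Z
    C = np.linalg.solve(G + lam * np.eye(n), G)
    np.fill_diagonal(C, 0.0)
    return C

# Toy data: two orthogonal 1-D subspaces embedded in a 3-D latent space.
rng = np.random.default_rng(0)
Z1 = np.outer([1.0, 0.0, 0.0], rng.uniform(1, 2, 5))  # columns 0-4
Z2 = np.outer([0.0, 1.0, 0.0], rng.uniform(1, 2, 5))  # columns 5-9
Z = np.hstack([Z1, Z2])

C = self_expressive_coeffs(Z)
A = np.abs(C) + np.abs(C).T  # symmetric affinity matrix

# Points from the same subspace receive nonzero affinity, while
# cross-subspace affinities vanish because the subspaces are orthogonal.
print(A[0, 1] > 1e-3, A[0, 5] < 1e-8)
```

In the paper's affinity fusion variant, the key idea is that one such coefficient matrix is shared across all modalities, so the affinity passed to spectral clustering is consistent between them.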

Mahdi Abavisani, Vishal M. Patel • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Clustering | MNIST | NMI | 0.9209 | 92 |
| Image Clustering | USPS | NMI | 0.9209 | 43 |
| Clustering | E-MNIST | Accuracy | 65.3 | 25 |
| Clustering | RGB-D Object | NMI | 0.608 | 18 |
| Multi-view Subspace Clustering | Yale | NMI | 76.9 | 18 |
| Multi-view Subspace Clustering | ORL | NMI | 92.8 | 18 |
| Multi-view Subspace Clustering | BBCSport | NMI | 81.3 | 18 |
| Multi-view Subspace Clustering | Still DB | NMI | 16.8 | 18 |
| Clustering | CCV | ACC | 18.3 | 15 |
| Multimodal Subspace Clustering | ARL | ACC | 98.34 | 14 |

(10 of 14 rows shown)

Other info

Code
