
Decoupling Common and Unique Representations for Multimodal Self-supervised Learning

About

The increasing availability of multi-sensor data sparks wide interest in multimodal self-supervised learning. However, most existing approaches learn only common representations across modalities while ignoring intra-modal training and modality-unique representations. We propose Decoupling Common and Unique Representations (DeCUR), a simple yet effective method for multimodal self-supervised learning. By distinguishing inter- and intra-modal embeddings through multimodal redundancy reduction, DeCUR can integrate complementary information across different modalities. We evaluate DeCUR in three common multimodal scenarios (radar-optical, RGB-elevation, and RGB-depth), and demonstrate its consistent improvement regardless of architecture and in both multimodal and modality-missing settings. With thorough experiments and comprehensive analysis, we hope this work provides valuable insights and attracts more interest in researching the hidden relationships of multimodal representations.
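The abstract describes decoupling embeddings into common and unique dimensions via multimodal redundancy reduction. The sketch below illustrates one plausible form of such a cross-modal objective in the spirit of Barlow-Twins-style redundancy reduction: the cross-correlation of the "common" dimensions is pushed toward the identity, while the "unique" dimensions are pushed toward zero cross-modal correlation. The function names, the dimension split, and the loss weighting are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def cross_correlation(za, zb):
    # Batch-normalize each embedding dimension, then compute the
    # d x d cross-correlation matrix between the two modalities.
    za = (za - za.mean(0)) / (za.std(0) + 1e-8)
    zb = (zb - zb.mean(0)) / (zb.std(0) + 1e-8)
    return za.T @ zb / za.shape[0]

def decur_cross_modal_loss(z1, z2, n_common, lam=0.005):
    """Hypothetical sketch of a decoupled cross-modal objective:
    the first n_common embedding dimensions ('common') are pushed
    toward perfect cross-modal correlation, the remaining
    dimensions ('unique') toward zero cross-modal correlation."""
    c = cross_correlation(z1, z2)
    cc = c[:n_common, :n_common]   # common-common block
    uu = c[n_common:, n_common:]   # unique-unique block
    # Align common dimensions: diagonal -> 1, off-diagonal -> 0.
    on_diag = ((np.diag(cc) - 1.0) ** 2).sum()
    off_diag = (cc ** 2).sum() - (np.diag(cc) ** 2).sum()
    # Decorrelate unique dimensions across modalities: diagonal -> 0.
    unique = (np.diag(uu) ** 2).sum()
    return on_diag + lam * off_diag + unique
```

Under this sketch, identical embeddings yield zero loss on the common block but a nonzero penalty on the unique block, which is what drives the two modalities to keep modality-specific information in the unique dimensions.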

Yi Wang, Conrad M. Albrecht, Nassim Ait Ali Braham, Chenying Liu, Zhitong Xiong, Xiao Xiang Zhu • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Segmentation | m-chesapeake | Mean mIoU | 69.83 | 23 |
| Field Boundary Segmentation | FTW (test) | Pixel IoU | 49 | 19 |
| Flood Inundation Mapping | Sen1Flood11 | mIoU | 86.87 | 15 |
| Image Classification | m-forestnet (test) | Mean Accuracy | 55.9 | 13 |
| Segmentation | m-nz-cattle | Mean IoU | 83.04 | 13 |
| Segmentation | m-cashew-plant | Mean IoU | 84.15 | 13 |
| Segmentation | m-NeonTree | Mean mIoU | 57.47 | 13 |
| Classification | m-so2sat (test) | Mean Accuracy | 56.68 | 13 |
| Segmentation | m-SA crop-type | Mean mIoU | 34.49 | 13 |
| Classification | m-pv4ger (test) | Mean Accuracy | 97.38 | 13 |

Showing 10 of 13 benchmark results.
