
Learning Deep Multimodal Feature Representation with Asymmetric Multi-layer Fusion

About

We propose a compact and effective framework to fuse multimodal features at multiple layers in a single network. The framework consists of two innovative fusion schemes. First, unlike existing multimodal methods that require individual encoders for different modalities, we verify that multimodal features can be learned within a shared single network by merely maintaining modality-specific batch normalization layers in the encoder, which also enables implicit fusion via joint feature representation learning. Second, we propose a bidirectional multi-layer fusion scheme in which multimodal features can be exploited progressively. To take advantage of such a scheme, we introduce two asymmetric fusion operations, channel shuffle and pixel shift, which learn different fused features with respect to different fusion directions. These two operations are parameter-free; they strengthen multimodal feature interactions across channels and enhance spatial feature discrimination within channels. We conduct extensive experiments on semantic segmentation and image translation tasks, based on three publicly available datasets covering diverse modalities. Results indicate that our proposed framework is general, compact, and superior to state-of-the-art fusion frameworks.
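The abstract names three building blocks: modality-specific batch normalization inside a shared encoder, and two parameter-free fusion operations (channel shuffle and pixel shift). The NumPy sketch below illustrates the general idea of each; the exact interleaving pattern, shift amount, and which channels are shifted are assumptions for illustration, not the paper's precise scheme.

```python
import numpy as np

def modality_specific_bn(x, stats, modality, eps=1e-5):
    # Shared-encoder idea: all conv weights are shared across modalities,
    # but each modality keeps its own normalization statistics.
    # `stats` maps a modality name to per-channel (mean, var) arrays.
    mean, var = stats[modality]
    return (x - mean[:, None, None]) / np.sqrt(var[:, None, None] + eps)

def channel_shuffle(a, b):
    # Parameter-free cross-modal mixing: interleave the channels of two
    # modality feature maps (C, H, W) -> (2C, H, W) so that subsequent
    # layers see both modalities in every channel group.
    c, h, w = a.shape
    return np.stack([a, b], axis=1).reshape(2 * c, h, w)  # a0,b0,a1,b1,...

def pixel_shift(x, shift=1):
    # Parameter-free spatial mixing: shift half of the channels to the
    # right by `shift` pixels (zero-padded) so each position also sees
    # neighboring-pixel features. Shifting only the second half of the
    # channels is an illustrative choice.
    out = x.copy()
    half = x.shape[0] // 2
    out[half:, :, shift:] = x[half:, :, :-shift]
    out[half:, :, :shift] = 0
    return out

# Toy usage with hypothetical RGB and depth feature maps:
rgb = np.random.rand(4, 8, 8)
depth = np.random.rand(4, 8, 8)
fused = pixel_shift(channel_shuffle(rgb, depth))
print(fused.shape)  # (8, 8, 8)
```

Because both fusion operations are pure index manipulations, they add no learnable parameters, matching the "parameter-free" claim; the asymmetry comes from applying different operations along the two fusion directions.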

Yikai Wang, Fuchun Sun, Ming Lu, Anbang Yao • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic segmentation | NYUD v2 (test) | mIoU 51.2 | 187 |
| Semantic segmentation | NYUD v2 | mIoU 51.2 | 96 |
| Semantic segmentation | Cityscapes (val) | mIoU 82.1 | 11 |
| Image translation | Taskonomy Shade Depth | FID 82.5 | 5 |
| Image translation | Taskonomy Normal Texture | FID 77.8 | 5 |
| Image translation | Taskonomy Depth, Texture, Normal | FID 75.1 | 5 |
| Image translation | Taskonomy Shade Normal Edge | FID 79.4 | 5 |
