
Countering Multi-modal Representation Collapse through Rank-targeted Fusion

About

Multi-modal fusion methods often suffer from two types of representation collapse: feature collapse, where individual dimensions lose their discriminative power (as measured by eigenspectra), and modality collapse, where one dominant modality overwhelms the other. Applications such as human action anticipation that require fusing heterogeneous sensor data are hindered by both. However, existing methods counter feature collapse and modality collapse separately, because there is no unifying framework that efficiently addresses both in conjunction. In this paper, we posit effective rank as an informative measure for quantifying and countering both forms of representation collapse. We propose the Rank-enhancing Token Fuser, a theoretically grounded fusion framework that selectively blends less informative features from one modality with complementary features from another, and we show that our method increases the effective rank of the fused representation. To address modality collapse, we evaluate modality combinations that mutually increase each other's effective rank, and we show that depth maintains representational balance when fused with RGB, avoiding modality collapse. We validate our method on action anticipation, where we present R3D, a depth-informed fusion framework. Extensive experiments on NTURGBD, UTKinect, and DARai demonstrate that our approach significantly outperforms prior state-of-the-art methods by up to 3.74%. Our code is available at: https://github.com/olivesgatech/R3D.
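The abstract's central quantity, effective rank, can be illustrated with a short sketch. The snippet below uses the standard entropy-of-singular-values definition (Roy & Vetterli, 2007) applied to a feature matrix; the paper's exact formulation may differ, and the matrix shapes here are arbitrary illustrative assumptions.

```python
import numpy as np

def effective_rank(features: np.ndarray) -> float:
    """Effective rank: exponential of the Shannon entropy of the
    normalized singular-value distribution of the feature matrix."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / s.sum()              # singular values as a probability distribution
    p = p[p > 0]                 # drop zeros to avoid log(0)
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
# Diverse features: a random 256x64 matrix is numerically full rank,
# so its effective rank is high (approaching the dimension, 64).
full = rng.standard_normal((256, 64))
# "Collapsed" features: a rank-1 outer product concentrates all energy
# in one singular value, so its effective rank is close to 1.
collapsed = np.outer(rng.standard_normal(256), rng.standard_normal(64))

print(effective_rank(full))       # high, near the feature dimension
print(effective_rank(collapsed))  # close to 1
```

Under this measure, feature collapse shows up as a drop in a single modality's effective rank, while a fusion that raises the effective rank of the combined representation, as the Rank-enhancing Token Fuser aims to do, preserves more discriminative dimensions.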

Seulgi Kim, Kiran Kokilepersaud, Mohit Prabhushankar, Ghassan AlRegib • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Anticipation | DARai | Anticipation Accuracy | 35.02 | 64 |
| Action Anticipation | DARai (Coarse) | MoC Accuracy | 46.29 | 64 |
| Action Anticipation | DARai Fine-grained | MoC Accuracy | 0.3257 | 56 |
| Action Anticipation | NTURGBD | MoC Accuracy | 23.17 | 56 |
| Action Anticipation | UTKinects | MoC Accuracy | 38.96 | 56 |
| Action Segmentation | DARai (Coarse) | Accuracy | 33.32 | 2 |
| Action Segmentation | DARai Fine-grained | Accuracy | 20.79 | 2 |
| Action Segmentation | NTURGBD | Accuracy | 0.1326 | 2 |
