
LION: A Clifford Neural Paradigm for Multimodal-Attributed Graph Learning

About

Recently, the rapid advancement of multimodal domains has driven a data-centric paradigm shift in graph ML, transitioning from text-attributed to multimodal-attributed graphs. This shift significantly enriches data representation and expands the scope of graph downstream tasks, such as modality-oriented tasks, thereby improving the practical utility of graph ML. Despite this promise, current neural paradigms have two limitations: (1) Neglected context in modality alignment: most existing methods adopt topology-constrained or modality-specific operators as tokenizers. These aligners inevitably ignore graph context and inhibit modality interaction, resulting in suboptimal alignment. (2) Lack of adaptation in modality fusion: most existing methods are simple adaptations for two-modality graphs and fail to adequately exploit aligned tokens equipped with topology priors during fusion, leading to poor generalizability and performance degradation. To address these issues, we propose LION (cLIffOrd Neural paradigm), which builds on Clifford algebra and the decoupled graph neural paradigm (i.e., propagation-then-aggregation) to implement alignment-then-fusion in multimodal-attributed graphs. Specifically, we first construct a modality-aware geometric manifold grounded in Clifford algebra. This geometry-induced high-order graph propagation efficiently achieves modality interaction, facilitating modality alignment. Then, based on the geometric grade properties of the aligned tokens, we propose adaptive holographic aggregation. This module integrates the energy and scale of geometric grades with learnable parameters to improve modality fusion. Extensive experiments on 9 datasets demonstrate that LION significantly outperforms SOTA baselines across 3 graph and 3 modality downstream tasks.
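To make the two-stage alignment-then-fusion idea concrete, below is a minimal numpy sketch. This is not the paper's implementation: LION constructs a Clifford-algebra manifold and a true geometric product, whereas this toy version only mimics the structure. All function names (`propagate`, `holographic_aggregate`) and the elementwise-product stand-in for a geometric grade are hypothetical illustrations.

```python
import numpy as np

def propagate(A, X, hops=2):
    """Alignment stage (sketch): high-order propagation mixes each
    modality's features with graph context by averaging over
    neighbors, repeated `hops` times."""
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1)  # row-normalize
    for _ in range(hops):
        X = P @ X
    return X

def holographic_aggregate(grades, weights):
    """Fusion stage (sketch): combine per-modality token blocks
    ("grades"), scaling each by its energy (squared Frobenius norm)
    times a learnable weight, then normalizing the coefficients."""
    energies = np.array([np.linalg.norm(g) ** 2 for g in grades])
    alpha = energies * weights
    alpha = alpha / alpha.sum()
    return sum(a * g for a, g in zip(alpha, grades))

# Toy 4-node graph with two modalities (e.g. text and image features).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X_text  = np.eye(4)[:, :3]          # 4 nodes, 3-dim text features
X_image = np.ones((4, 3)) * 0.5     # 4 nodes, 3-dim image features

T = propagate(A, X_text)            # context-aware text tokens
I = propagate(A, X_image)           # context-aware image tokens
inter = T * I                       # toy stand-in for a geometric-product grade
fused = holographic_aggregate([T, I, inter], np.array([1.0, 1.0, 0.5]))
print(fused.shape)                  # (4, 3)
```

The point of the sketch is the decoupling: propagation runs once per modality (cheap, no learned weights), while fusion adapts to however many grades the aligned representation produces, so the scheme is not hard-wired to two modalities.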

Xunkai Li, Zhengyu Wu, Zekai Chen, Henan Sun, Daohan Su, Guang Zeng, Hongchao Qin, Rong-Hua Li, Guoren Wang • 2026

Related benchmarks

Task                 Dataset      Metric        Result  Rank
Graph-to-Image       SemArt       CLIP-S Score  74.21   14
Graph-to-Text        Flickr30K    BLEU-4        11.54   14
Link Prediction      Cloth        MRR           58.47   14
Modal Retrieval      Ele-fashion  MRR           94.67   14
Node Classification  Movies       Accuracy      58.61   14
Node Classification  Goodreads    Accuracy      78.54   14
Node Clustering      RedditS      NMI           90.53   14
Node Clustering      Grocery      NMI           58.54   14
