# MoEGCL: Mixture of Ego-Graphs Contrastive Representation Learning for Multi-View Clustering

## About
In recent years, the advancement of Graph Neural Networks (GNNs) has significantly propelled progress in Multi-View Clustering (MVC). However, existing methods suffer from coarse-grained graph fusion: they typically generate a separate graph structure for each view and then fuse the graphs with view-level weights, a relatively rough strategy. To address this limitation, we present a novel Mixture of Ego-Graphs Contrastive Representation Learning (MoEGCL) framework consisting of two modules. First, we propose an innovative Mixture of Ego-Graphs Fusion (MoEGF) module, which constructs ego graphs and uses a Mixture-of-Experts network to fuse them at the sample level, in contrast to conventional view-level fusion. Second, we present the Ego Graph Contrastive Learning (EGCL) module, which aligns the fused representation with each view-specific representation. EGCL increases the representation similarity of samples from the same cluster, rather than only aligning representations of the same sample across views, further strengthening fine-grained graph representation. Extensive experiments demonstrate that MoEGCL achieves state-of-the-art results in deep multi-view clustering tasks. The source code is publicly available at https://github.com/HackerHyper/MoEGCL.
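To make the two-module design concrete, here is a minimal PyTorch sketch of the core idea: a gating network mixes per-view expert outputs with *per-sample* weights (the MoEGF idea), and a plain InfoNCE loss stands in for the EGCL alignment objective. All class and function names are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEgoGraphFusion(nn.Module):
    """Hypothetical sketch of sample-level mixture-of-experts fusion.

    Each view-specific ego-graph embedding is transformed by its own
    expert; a gating network produces one softmax weight vector PER
    SAMPLE, so fusion weights vary across samples instead of being
    fixed per view.
    """

    def __init__(self, n_views: int, dim: int):
        super().__init__()
        # one expert per view (a linear layer as a placeholder for a GNN)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_views))
        # gate scores the experts from the concatenated view embeddings
        self.gate = nn.Linear(n_views * dim, n_views)

    def forward(self, view_embs):
        # view_embs: list of (N, dim) tensors, one per view
        h = torch.stack([e(z) for e, z in zip(self.experts, view_embs)], dim=1)  # (N, V, dim)
        w = F.softmax(self.gate(torch.cat(view_embs, dim=1)), dim=1)            # (N, V)
        return (w.unsqueeze(-1) * h).sum(dim=1)                                  # (N, dim)

def info_nce(fused, view_emb, tau=0.5):
    """Standard InfoNCE between fused and view-specific embeddings
    (a generic stand-in for the paper's EGCL objective)."""
    a = F.normalize(fused, dim=1)
    b = F.normalize(view_emb, dim=1)
    logits = a @ b.t() / tau                  # pairwise similarities
    targets = torch.arange(a.size(0))         # i-th fused matches i-th view sample
    return F.cross_entropy(logits, targets)

# toy usage: 3 views, 8 samples, 16-dim embeddings
torch.manual_seed(0)
views = [torch.randn(8, 16) for _ in range(3)]
fusion = MoEgoGraphFusion(n_views=3, dim=16)
z = fusion(views)
loss = sum(info_nce(z, v) for v in views)
print(z.shape, loss.item())
```

The key design point is that the gate's softmax is computed row-wise over samples, so two samples in the same view can receive different expert mixtures, which is what distinguishes sample-level from view-level fusion.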
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-view Clustering | Caltech-5V | Accuracy (ACC) | 82.07 | 15 |
| Multi-view Clustering | RGBD | Accuracy (ACC) | 48.86 | 9 |
| Multi-view Clustering | LandUse | Accuracy (ACC) | 33.81 | 9 |
| Multi-view Clustering | MNIST | Accuracy (ACC) | 99.2 | 9 |
| Multi-view Clustering | LGG | Accuracy (ACC) | 74.91 | 9 |
| Multi-view Clustering | WebKB | Accuracy (ACC) | 95.15 | 9 |