
Hypergraph Vision Transformers: Images are More than Nodes, More than Edges

About

Recent advancements in computer vision have highlighted the scalability of Vision Transformers (ViTs) across various tasks, yet challenges remain in balancing adaptability, computational efficiency, and the ability to model higher-order relationships. Vision Graph Neural Networks (ViGs) offer an alternative by leveraging graph-based methodologies but are hindered by the computational bottlenecks of clustering algorithms used for edge generation. To address these issues, we propose the Hypergraph Vision Transformer (HgVT), which incorporates a hierarchical bipartite hypergraph structure into the vision transformer framework to capture higher-order semantic relationships while maintaining computational efficiency. HgVT leverages population and diversity regularization for dynamic hypergraph construction without clustering, and expert edge pooling to enhance semantic extraction and facilitate graph-based image retrieval. Empirical results demonstrate that HgVT achieves strong performance on image classification and retrieval, positioning it as an efficient framework for semantic-based vision tasks.
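The abstract's key departure from ViGs is building hyperedge membership dynamically, without a clustering step, and steering it with population and diversity regularizers. The paper's exact formulation is not reproduced here, so the following is only a minimal illustrative sketch under assumed definitions: soft vertex-to-hyperedge assignment via a learned similarity softmax, a "population" penalty that discourages empty hyperedges, and a "diversity" penalty that decorrelates hyperedge membership patterns. The function names and loss forms are hypothetical, not from the paper.

```python
import numpy as np

def soft_hyperedge_assignment(vertex_feats, edge_feats, tau=1.0):
    """Assign vertex tokens to hyperedge tokens by softmax similarity.

    vertex_feats: (N, d) array of image-patch vertex features.
    edge_feats:   (E, d) array of learnable hyperedge features.
    Returns an (N, E) soft membership matrix whose rows sum to 1,
    so no discrete clustering pass is needed.
    """
    logits = vertex_feats @ edge_feats.T / tau          # (N, E) similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    member = np.exp(logits)
    member /= member.sum(axis=1, keepdims=True)         # row-wise softmax
    return member

def population_diversity_losses(member):
    """Hypothetical regularizers on the soft membership matrix.

    Population: push each hyperedge's average membership mass toward a
    uniform share, penalizing hyperedges that collapse to empty.
    Diversity: penalize pairs of hyperedges with overlapping membership
    patterns via off-diagonal entries of a normalized Gram matrix.
    """
    n_edges = member.shape[1]
    pop = member.mean(axis=0)                           # (E,) mass per edge
    pop_loss = np.sum((pop - 1.0 / n_edges) ** 2)
    norm = member / (np.linalg.norm(member, axis=0, keepdims=True) + 1e-8)
    gram = norm.T @ norm                                # (E, E) edge overlaps
    div_loss = np.sum((gram - np.eye(n_edges)) ** 2)    # off-diagonal ~ overlap
    return pop_loss, div_loss
```

In a full model these losses would be added to the task loss so the membership matrix stays both populated and differentiated; here they merely illustrate how clustering-free hypergraph construction can be regularized.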

Joshua Fixelle • 2025

Related benchmarks

Task                  Dataset                         Metric      Result  Rank
Image Classification  ImageNet V2                     Top-1 Acc   70.1    487
Image Classification  ImageNet-ReaL                   Precision@1 86.7    195
Image Retrieval       Revisited Paris (RPar) (Hard)   mAP         31.1    115
Image Retrieval       Revisited Paris (RPar) (Medium) mAP         56.7    100
Image Retrieval       ImageNet-1K                     mAP@10      73.23   12
Image Retrieval       ROxford Revisited (Hard)        mAP         12.1    6
Image Retrieval       ROxford Revisited (Medium)      mAP         28      6
