
Training-Free Dual Hyperbolic Adapters for Better Cross-Modal Reasoning

About

Recent research in Vision-Language Models (VLMs) has significantly advanced our capabilities in cross-modal reasoning. However, existing methods suffer from performance degradation under domain shift or require substantial computational resources for fine-tuning in new domains. To address this issue, we develop a new adaptation method for large vision-language models, called Training-Free Dual Hyperbolic Adapters (T-DHA). We characterize the vision-language relationship between semantic concepts, which typically has a hierarchical tree structure, in hyperbolic space instead of the traditional Euclidean space. Hyperbolic spaces exhibit exponential volume growth with radius, unlike the polynomial growth in Euclidean space. We find that this unique property is particularly effective for embedding hierarchical data structures using the Poincaré ball model, achieving significantly improved representation and discrimination power. Coupled with negative learning, it provides more accurate and robust classification with fewer feature dimensions. Our extensive experimental results on various datasets demonstrate that T-DHA significantly outperforms existing state-of-the-art methods on few-shot image recognition and domain generalization tasks.
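The abstract's core geometric idea is that distances in the Poincaré ball grow exponentially toward the boundary, which matches tree-like concept hierarchies: general concepts can sit near the origin and specific ones near the boundary. As a minimal illustration (not the paper's implementation), the standard Poincaré-ball geodesic distance can be sketched as follows; the point names are hypothetical:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball:
    d(u, v) = arcosh(1 + 2 ||u - v||^2 / ((1 - ||u||^2)(1 - ||v||^2)))."""
    sq_diff = np.sum((u - v) ** 2)
    sq_u = np.sum(u ** 2)
    sq_v = np.sum(v ** 2)
    # eps guards the denominator when points approach the boundary.
    arg = 1.0 + 2.0 * sq_diff / max((1.0 - sq_u) * (1.0 - sq_v), eps)
    return float(np.arccosh(arg))

# Hypothetical embeddings: a "root" concept at the origin and two
# "leaf" concepts near the boundary. Distances blow up near the
# boundary, so leaves on opposite branches end up far apart even
# though their Euclidean separation is bounded by the ball diameter.
root = np.array([0.0, 0.0])
leaf_a = np.array([0.9, 0.0])
leaf_b = np.array([-0.9, 0.0])
```

Here `poincare_distance(leaf_a, leaf_b)` exceeds `poincare_distance(root, leaf_a)` even though the Euclidean gap only doubles, which is the exponential-volume property the abstract appeals to.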

Yi Zhang, Chun-Wun Cheng, Junyi He, Ke Yu, Yushun Tang, Carola-Bibiane Schönlieb, Zhihai He, Angelica I. Aviles-Rivero • 2025

Related benchmarks

Task                  Dataset                    Metric    Result  Rank
Image Classification  Average 11 datasets        -         -       52
Image Classification  ImageNet V2 (Target)       Accuracy  57.11   42
Image Classification  ImageNet-Sketch (Target)   Accuracy  37.92   30
Image Classification  ImageNet (source)          Accuracy  64.85   23
Classification        ImageNet 16-shot           Accuracy  64.85   5
