
BiPrompt: Bilateral Prompt Optimization for Visual and Textual Debiasing in Vision-Language Models

About

Vision-language foundation models such as CLIP exhibit impressive zero-shot generalization yet remain vulnerable to spurious correlations across visual and textual modalities. Existing debiasing approaches often address a single modality, either visual or textual, leading to partial robustness and unstable adaptation under distribution shifts. We propose a bilateral prompt optimization framework (BiPrompt) that simultaneously mitigates non-causal feature reliance in both modalities during test-time adaptation. On the visual side, it employs structured attention-guided erasure to suppress background activations and enforce orthogonal prediction consistency between causal and spurious regions. On the textual side, it introduces balanced prompt normalization, a learnable re-centering mechanism that aligns class embeddings toward an isotropic semantic space. Together, these modules jointly minimize the conditional mutual information between spurious cues and predictions, steering the model toward causal, domain-invariant reasoning without retraining or domain supervision. Extensive evaluations on real-world and synthetic bias benchmarks demonstrate consistent improvements in both average and worst-group accuracy over prior test-time debiasing methods, establishing a lightweight yet effective path toward trustworthy, causally grounded vision-language adaptation.
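The textual-side re-centering described above can be pictured as a simple operation on the class text embeddings. The sketch below (function name and all details are our own illustration, not the authors' code) removes the shared mean direction from the class embeddings and projects them back onto the unit sphere, which pushes the set of embeddings toward the isotropic layout the abstract refers to:

```python
import numpy as np

def recenter_class_embeddings(class_embeds: np.ndarray) -> np.ndarray:
    """Illustrative sketch of a prompt re-centering step.

    Subtract the mean embedding (the direction shared by all classes,
    which carries no class-discriminative signal) and re-normalize each
    row to unit length, as CLIP-style similarity scoring expects.
    """
    centered = class_embeds - class_embeds.mean(axis=0, keepdims=True)
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)

# Toy example: 3 class embeddings in a 4-D space that all share a
# dominant first component (an anisotropic, "collapsed" configuration).
embeds = np.array([[1.0, 0.2, 0.0, 0.1],
                   [0.9, 0.1, 0.3, 0.0],
                   [1.1, 0.0, 0.1, 0.2]])
out = recenter_class_embeddings(embeds)
print(np.linalg.norm(out, axis=1))  # each row now has unit norm
```

In the paper this re-centering is described as learnable; the fixed mean subtraction here is only the non-parametric special case, shown to make the geometric intuition concrete.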

Sunny Gupta, Shounak Das, Amit Sethi • 2026

Related benchmarks

Task                  | Dataset             | Metric         | Result | Rank
----------------------|---------------------|----------------|--------|-----
Image Classification  | Waterbirds          | WG Accuracy    | 66.6   | 79
Classification        | ImageNet-A OOD      | Top-1 Accuracy | 52.2   | 8
Classification        | CUB-200 (OOD)       | Top-1 Accuracy | 31     | 8
Classification        | Tiny-ImageNet (OOD) | Top-1 Accuracy | 44.1   | 8
Image Classification  | CamelDeer           | Avg Accuracy   | 97.2   | 7
Image Classification  | SpiderCrab          | Avg Accuracy   | 97.4   | 7
