Test-Time Adaptation for Tactile-Vision-Language Models
About
Tactile-vision-language (TVL) models are increasingly deployed in real-world robotic and multimodal perception tasks, where test-time distribution shifts are unavoidable. Existing test-time adaptation (TTA) methods offer sample filtering in unimodal settings but lack explicit treatment of modality-wise reliability under asynchronous cross-modal shifts, leaving them brittle when some modalities become unreliable. We study TTA for TVL models under such shifts and propose a reliability-aware framework that estimates per-modality reliability from prediction uncertainty and perturbation-based responses. This shared reliability signal is used to (i) filter unreliable test samples, (ii) adaptively fuse tactile, visual, and language features, and (iii) regularize test-time optimization with a reliability-guided objective. On the TAG-C benchmark and additional TVL scenarios, our approach consistently outperforms strong TTA baselines, achieving accuracy gains of up to 49.9% under severe modality corruptions, underscoring the importance of explicit modality-wise reliability modeling for robust test-time adaptation.
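The abstract does not specify the reliability estimator, so the following is only a minimal sketch of the general idea, assuming entropy-based uncertainty as the reliability proxy and a softmax over modalities as the fusion rule; function names (`reliability_weights`, `fuse`) are illustrative, not from the paper.

```python
import numpy as np

def entropy(p):
    # Shannon entropy (natural log) of a probability vector.
    return -np.sum(p * np.log(p + 1e-12))

def reliability_weights(logits_per_modality):
    """Map per-modality class logits to fusion weights.

    Assumption (not the paper's exact estimator): lower predictive
    entropy means a more reliable modality. The score for each
    modality is log(num_classes) minus its predictive entropy, so a
    uniform (maximally uncertain) prediction scores 0.
    """
    scores = []
    for logits in logits_per_modality:
        p = np.exp(logits - logits.max())
        p /= p.sum()
        scores.append(np.log(len(p)) - entropy(p))
    s = np.array(scores)
    return np.exp(s) / np.exp(s).sum()  # softmax over modalities

def fuse(features, weights):
    # Reliability-weighted sum of per-modality feature vectors.
    return sum(w * f for w, f in zip(weights, features))
```

A usage sketch: given tactile, visual, and language logits where the visual head is confidently peaked and the tactile head is near-uniform (e.g. under tactile corruption), the visual modality receives the largest fusion weight. The same scores could gate sample filtering (drop a sample when all modality scores fall below a threshold), matching step (i) in the abstract.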
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Classification | TAG-C corrupted visual modality v1 based on ImageNet-C corruptions (test) | Brightness Accuracy (TAG-C) | 61.6 | 6 |
| Classification | TAG-C corrupted visual modality | Top-1 Accuracy | 62 | 6 |
| Tactile Recognition | TAG-C tactile modality, continuous cross-domain setting (test) | Brittleness Score | 67.6 | 6 |