
Gated Relational Alignment via Confidence-based Distillation for Efficient VLMs

About

Vision-Language Models (VLMs) achieve strong multimodal performance but are costly to deploy, and post-training quantization often causes significant accuracy loss. Despite its potential, quantization-aware training (QAT) for VLMs remains underexplored. We propose GRACE, a framework unifying knowledge distillation and QAT under the Information Bottleneck principle: quantization constrains information capacity while distillation guides what to preserve within this budget. Treating the teacher as a proxy for task-relevant information, we introduce confidence-gated decoupled distillation to filter unreliable supervision, relational centered kernel alignment to transfer visual token structures, and an adaptive controller based on Lagrangian relaxation to balance fidelity against capacity constraints. Across extensive benchmarks on the LLaVA and Qwen families, our INT4 models consistently outperform FP16 baselines (e.g., LLaVA-1.5-7B: 70.1 vs. 66.8 on SQA; Qwen2-VL-2B: 76.9 vs. 72.6 on MMBench), nearly matching teacher performance. Using real INT4 kernels, we achieve 3$\times$ throughput with a 54% memory reduction. This principled framework significantly outperforms existing quantization methods, making GRACE a compelling solution for resource-constrained deployment.
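
To make the two distillation signals concrete, here is a minimal sketch, assuming PyTorch tensors of token-level logits and visual features; the function names, the confidence threshold, and the tensor shapes below are illustrative assumptions and are not taken from the paper's released code:

```python
# Hedged sketch (not the authors' implementation) of a confidence-gated KD loss
# and a linear CKA term for relational alignment of visual tokens.
import torch
import torch.nn.functional as F

def confidence_gated_kd(student_logits, teacher_logits, tau=2.0, conf_threshold=0.6):
    """KL distillation where teacher supervision is kept only on tokens whose
    teacher confidence (max softmax probability) exceeds a threshold."""
    t_prob = F.softmax(teacher_logits / tau, dim=-1)
    s_logprob = F.log_softmax(student_logits / tau, dim=-1)
    # Gate: 1 for confident teacher predictions, 0 for unreliable ones.
    gate = (t_prob.max(dim=-1).values > conf_threshold).float()
    kl = F.kl_div(s_logprob, t_prob, reduction="none").sum(dim=-1)  # per-token KL
    return (gate * kl).sum() / gate.sum().clamp(min=1.0) * tau ** 2

def centered_kernel_alignment(student_tokens, teacher_tokens):
    """Linear CKA between student and teacher visual tokens of shape
    [num_tokens, dim]; higher means more similar relational structure."""
    def centered_gram(x):
        x = x - x.mean(dim=0, keepdim=True)  # center features
        return x @ x.t()
    ks, kt = centered_gram(student_tokens), centered_gram(teacher_tokens)
    hsic = (ks * kt).sum()  # Frobenius inner product of centered Gram matrices
    return hsic / (ks.norm() * kt.norm() + 1e-8)

if __name__ == "__main__":
    # Illustrative shapes only: 8 text tokens over a 32k vocab, 576 visual tokens.
    s_logits, t_logits = torch.randn(8, 32000), torch.randn(8, 32000)
    s_vis, t_vis = torch.randn(576, 1024), torch.randn(576, 1024)
    loss = confidence_gated_kd(s_logits, t_logits) + (1.0 - centered_kernel_alignment(s_vis, t_vis))
    print(loss.item())
```

The adaptive controller mentioned above would then trade these terms off against the quantization capacity constraint, e.g., by updating a Lagrange multiplier on the constraint violation during training; that schedule is specific to the paper and is not reproduced here.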

Yanlong Chen, Amirhossein Habibian, Luca Benini, Yawei Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 | Accuracy | 79.2 | 1165 |
| Visual Question Answering | VizWiz | Accuracy | 52.5 | 1043 |
| Object Hallucination Evaluation | POPE | Accuracy | 85.9 | 935 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 60.4 | 496 |
| Science Question Answering | ScienceQA | Accuracy | 71.3 | 229 |
| Multimodal Evaluation | MM-Bench | Accuracy | 66.1 | 57 |
| Vision-Language Understanding | Vision-Language Evaluation Suite (MMB, MMStar, MMMU, Hallusion, AI2D, OCR, SEED, SQA; test val) | MMB Score | 77.9 | 10 |
