
LLaVA-FA: Learning Fourier Approximation for Compressing Large Multimodal Models

About

Large multimodal models (LMMs) have achieved impressive performance on various vision-language tasks, but their substantial computational and memory costs hinder practical deployment. Existing compression methods often decouple low-rank decomposition from quantization, leading to compounded reconstruction errors, especially in multimodal architectures with cross-modal redundancy. To address this issue, we propose LLaVA-FA, a novel efficient LMM that performs joint low-rank plus quantization approximation in the frequency domain. By leveraging the de-correlation and conjugate-symmetry properties of the Fourier transform, LLaVA-FA achieves more compact and accurate weight representations. Furthermore, we introduce PolarQuant, a polar-coordinate quantization method tailored for complex matrices, and an optional diagonal calibration (ODC) scheme that eliminates the need for large-scale calibration data. Extensive experimental results demonstrate that LLaVA-FA outperforms existing efficient multimodal models across multiple benchmarks while maintaining minimal activated parameters and low computational costs, validating its effectiveness as a powerful solution for compressing LMMs.
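To make the core idea concrete, here is a minimal sketch of frequency-domain low-rank compression with polar-coordinate quantization. The function name, rank/bit-width choices, and the uniform magnitude/phase quantizer are illustrative assumptions, not the paper's actual implementation; the sketch only shows why the real FFT (conjugate symmetry) halves the stored spectrum and how complex low-rank factors can be quantized in polar form.

```python
import numpy as np

def fourier_lowrank_polar_quant(W, rank, n_bits=4):
    """Illustrative sketch (not the paper's method): low-rank
    approximation of a real weight matrix in the frequency domain,
    with polar-coordinate quantization of the complex factors."""
    # 2-D real FFT de-correlates the weights; rfft2 exploits conjugate
    # symmetry, so only ~half the complex spectrum is kept.
    F = np.fft.rfft2(W)

    # Truncated SVD of the complex spectrum -> low-rank factors.
    U, s, Vh = np.linalg.svd(F, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, r) complex factor
    B = Vh[:rank, :]             # (r, k) complex factor

    def polar_quant(M):
        # Store magnitude and phase separately, each uniformly
        # quantized to n_bits (a simple stand-in for PolarQuant).
        mag, ang = np.abs(M), np.angle(M)
        levels = 2 ** n_bits - 1
        mag_q = np.round(mag / mag.max() * levels) / levels * mag.max()
        ang_q = (np.round((ang + np.pi) / (2 * np.pi) * levels)
                 / levels * 2 * np.pi - np.pi)
        return mag_q * np.exp(1j * ang_q)

    # Recombine quantized factors and invert the FFT to get the
    # compressed real-valued weights.
    F_hat = polar_quant(A) @ polar_quant(B)
    return np.fft.irfft2(F_hat, s=W.shape)

W = np.random.randn(64, 64)
W_hat = fourier_lowrank_polar_quant(W, rank=16)
print(W_hat.shape)  # (64, 64)
```

Storing magnitude and phase separately is what makes quantization of complex matrices tractable here: both quantities have bounded, well-behaved ranges, unlike the raw real/imaginary parts.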

Pengcheng Zheng, Chaoning Zhang, Jiarong Mo, GuoHui Li, Jiaquan Zhang, Jiahao Zhang, Sihan Cao, Sheng Zheng, Caiyan Qin, Guoqing Wang, Yang Yang • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VizWiz | Accuracy | 62.5 | 1043
Visual Question Answering | GQA | Accuracy | 68.5 | 374
Visual Question Answering | TextVQA (val) | VQA Score | 68 | 309
Hallucination Evaluation | MMHal-Bench | MMHal Score | 2.79 | 174
Hallucination Evaluation | POPE | -- | -- | 132
Visual Question Answering | ScienceQA (test) | Accuracy | 77 | 95
Multimodal Reasoning | MMBench (dev) | Accuracy | 74.5 | 47
Hallucination Evaluation | Object-HalBench | -- | -- | 28
Multimodal Understanding and Reasoning | MME | MME Score | 74.5 | 26
Multimodal Understanding and Reasoning | MMBench Chinese (dev) | Accuracy | 69.5 | 22

Showing 10 of 11 rows.
