LLaVA-FA: Learning Fourier Approximation for Compressing Large Multimodal Models
About
Large multimodal models (LMMs) have achieved impressive performance on various vision-language tasks, but their substantial computational and memory costs hinder practical deployment. Existing compression methods often decouple low-rank decomposition and quantization, leading to compounded reconstruction errors, especially in multimodal architectures with cross-modal redundancy. To address this issue, we propose LLaVA-FA, a novel efficient LMM that performs joint low-rank-plus-quantization approximation in the frequency domain. By leveraging the de-correlation and conjugate-symmetry properties of the Fourier transform, LLaVA-FA achieves more compact and accurate weight representations. Furthermore, we introduce PolarQuant, a polar-coordinate quantization method tailored to complex matrices, and an optional diagonal calibration (ODC) scheme that eliminates the need for large-scale calibration data. Extensive experiments demonstrate that LLaVA-FA outperforms existing efficient multimodal models across multiple benchmarks while maintaining minimal activated parameters and low computational costs, validating it as a powerful solution for compressing LMMs.
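The pipeline described above can be illustrated with a minimal NumPy sketch: transform a weight matrix into the frequency domain, take a rank-r approximation of the complex spectrum, and quantize the factors in polar coordinates (magnitude and phase separately). This is a conceptual toy, not the paper's implementation; the bit widths, the uniform quantizer, and the function names `polar_quantize` and `fourier_lowrank_quant` are all assumptions for illustration.

```python
import numpy as np

def polar_quantize(z, mag_bits=4, phase_bits=4):
    """Uniformly quantize a complex array in polar coordinates,
    treating magnitude and phase as separate scalar codebooks.
    A simplified stand-in for PolarQuant; bit widths are illustrative."""
    mag, phase = np.abs(z), np.angle(z)
    mag_levels = 2 ** mag_bits - 1
    mag_step = mag.max() / mag_levels if mag.max() > 0 else 1.0
    mag_q = np.round(mag / mag_step) * mag_step
    phase_step = 2 * np.pi / (2 ** phase_bits)
    phase_q = np.round(phase / phase_step) * phase_step
    return mag_q * np.exp(1j * phase_q)

def fourier_lowrank_quant(W, rank=8, mag_bits=4, phase_bits=4):
    """Approximate a real weight matrix by (1) a 2-D FFT into the
    frequency domain, (2) a rank-r SVD of the complex spectrum, and
    (3) polar-coordinate quantization of the low-rank factors."""
    F = np.fft.fft2(W)                      # complex spectrum of the weights
    U, s, Vh = np.linalg.svd(F, full_matrices=False)
    Uq = polar_quantize(U[:, :rank] * s[:rank], mag_bits, phase_bits)
    Vq = polar_quantize(Vh[:rank], mag_bits, phase_bits)
    F_hat = Uq @ Vq                         # low-rank, quantized spectrum
    return np.real(np.fft.ifft2(F_hat))     # back to the spatial domain

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_hat = fourier_lowrank_quant(W, rank=16, mag_bits=6, phase_bits=6)
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

Performing the quantization on a single complex SVD factor, rather than on the spatial weights directly, is what lets low-rank truncation and quantization share one reconstruction objective instead of compounding two separate errors.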
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy 62.5 | 1043 |
| Visual Question Answering | GQA | Accuracy 68.5 | 374 |
| Visual Question Answering | TextVQA (val) | VQA Score 68 | 309 |
| Hallucination Evaluation | MMHal-Bench | MMHal Score 2.79 | 174 |
| Hallucination Evaluation | POPE | -- | 132 |
| Visual Question Answering | ScienceQA (test) | Accuracy 77 | 95 |
| Multimodal Reasoning | MMBench (dev) | Accuracy 74.5 | 47 |
| Hallucination Evaluation | Object-HalBench | -- | 28 |
| Multimodal Understanding and Reasoning | MME | MME Score 74.5 | 26 |
| Multimodal Understanding and Reasoning | MMBench Chinese (dev) | Accuracy 69.5 | 22 |