Revis: Sparse Latent Steering to Mitigate Object Hallucination in Large Vision-Language Models
About
Despite the advanced capabilities of Large Vision-Language Models (LVLMs), they frequently suffer from object hallucination: describing objects that are not present in the input image. One contributing factor is that visual information is progressively suppressed by pretrained textual priors as features pass through the deeper network layers. To address this, we propose REVIS, a training-free framework that explicitly re-activates this suppressed visual information. Rooted in latent-space geometry, REVIS extracts a pure visual information vector via orthogonal projection and employs a calibrated strategy to intervene sparsely, only at the precise depth where suppression occurs. This surgical approach restores visual information at minimal computational cost. Empirical evaluations on standard benchmarks demonstrate that REVIS reduces object hallucination rates by approximately 19% relative to state-of-the-art baselines, while preserving general reasoning capabilities.
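The projection-and-steering idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-layer hidden-state vectors and an orthonormal basis spanning the textual subspace, and the names `extract_visual_component`, `sparse_steer`, `text_basis`, and `alpha` are all illustrative.

```python
import numpy as np

def extract_visual_component(hidden, text_basis):
    """Recover the 'pure' visual direction by projecting out the textual subspace.

    hidden:     (d,) hidden-state vector at a given layer
    text_basis: (k, d) matrix with orthonormal rows spanning the textual subspace
    """
    # With orthonormal rows B, the projection onto the textual subspace is B^T B h.
    text_component = text_basis.T @ (text_basis @ hidden)
    # The residual is orthogonal to every textual direction.
    return hidden - text_component

def sparse_steer(hidden_states, text_basis, target_layer, alpha=1.0):
    """Re-inject the visual component only at the single layer where
    suppression is detected, leaving all other layers untouched."""
    steered = [h.copy() for h in hidden_states]
    visual = extract_visual_component(hidden_states[target_layer], text_basis)
    steered[target_layer] = hidden_states[target_layer] + alpha * visual
    return steered
```

Because the intervention touches one layer and costs only two matrix-vector products, the overhead per forward pass is negligible compared with decoding itself.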
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | -- | 935 |
| Multimodal Capability Evaluation | MM-Vet | Score | 47.48 | 282 |
| Object Hallucination | POPE (Random) | F1 Score | 91.43 | 200 |
| Object Hallucination | POPE (Adversarial) | Accuracy | 87.8 | 196 |
| Object Hallucination | POPE (Popular) | F1 Score | 89.91 | 188 |
| Hallucination Evaluation | CHAIR | CHAIR_s | 30 | 166 |
| Vision-Language Understanding | MM-Vet | Total Score | 72.16 | 43 |
| Large Multi-modal Model Evaluation | MME | Perception Score | 1510 | 14 |
| Vision-Language Evaluation | MME (full) | Perception Score | 1720 | 7 |