TARS: MinMax Token-Adaptive Preference Strategy for Hallucination Reduction in MLLMs
About
Multimodal large language models (MLLMs) are prone to hallucinations, generating plausible but visually ungrounded outputs, partly because direct preference optimization (DPO) overfits to superficial linguistic cues under static preference supervision. We propose TARS, a token-adaptive preference strategy that reformulates DPO as a principled min-max optimization problem. The inner maximization selectively perturbs visual-agnostic tokens to induce worst-case distributional shifts, while the outer minimization enforces alignment with causal visual signals rather than surface-level patterns. A novel spectral alignment loss further regularizes hidden representations in the frequency domain via the Fast Fourier Transform (FFT), preserving global semantic structure without rigid token-level correspondence.

We evaluate TARS across multiple hallucination benchmarks. Using only 4.8k preference samples without expert feedback, TARS reduces hallucination rates from 26.4% to 13.2% and cognition scores from 2.5 to 0.4, outperforming standard DPO by a large margin. Notably, TARS surpasses 5× LLM-based data augmentation trained on 28.8k samples (Hal-Rate: 16.0% vs. 13.2%), demonstrating that reshaping the optimization landscape via adversarial token perturbation is fundamentally more effective than scaling training data. TARS further narrows the gap with GPT-4o on key metrics.
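The two core ingredients above, the inner maximization over visual-agnostic tokens and the FFT-based spectral alignment loss, can be sketched in PyTorch. This is a minimal illustration under our own assumptions (function names, the FGSM-style single ascent step, and the MSE-on-magnitudes choice are ours for exposition), not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def spectral_alignment_loss(h, h_ref):
    """Compare hidden states (batch, seq_len, dim) in the frequency
    domain: FFT along the token axis, then match magnitude spectra.
    Discarding phase relaxes rigid token-level correspondence while
    preserving global structure."""
    mag = torch.fft.rfft(h, dim=1).abs()
    mag_ref = torch.fft.rfft(h_ref, dim=1).abs()
    return F.mse_loss(mag, mag_ref)

def inner_max_perturbation(embeds, token_mask, loss_fn, eps=0.05):
    """One sign-gradient ascent step (an assumed FGSM-style solver for
    the inner max): perturb only the masked (visual-agnostic) token
    embeddings in the direction that increases the preference loss.
    embeds: (batch, seq_len, dim); token_mask: (batch, seq_len) in {0,1}."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    loss = loss_fn(embeds + delta)
    loss.backward()
    # Restrict the worst-case shift to visual-agnostic positions.
    step = eps * delta.grad.sign() * token_mask.unsqueeze(-1)
    return (embeds + step).detach()
```

The outer minimization would then run the usual DPO objective on the perturbed embeddings, with `spectral_alignment_loss` added as a regularizer between policy and reference hidden states.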
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Hallucination Evaluation | AMBER | CHAIR | 4 | 172 |
| Hallucination Evaluation | POPE | Accuracy | 87.4 | 153 |
| Multimodal Understanding | LLaVA-Bench | Overall Score | 67.2 | 72 |
| Hallucination Evaluation | MMHal | Score | 2.76 | 37 |
| VQA Hallucination | MMHal | Score | 2.89 | 21 |
| Captioning Hallucination | ObjHal | CRs | 14.9 | 21 |
| Hallucination Evaluation | ObjHal | CRs Accuracy | 29.3 | 6 |
| Multimodal Understanding | SEEDBench | SeedBench Accuracy | 38.7 | 3 |