Octopus: Alleviating Hallucination via Dynamic Contrastive Decoding
About
Large Vision-Language Models (LVLMs) have achieved impressive performance in visual content understanding and multi-modal reasoning. Unfortunately, these large models suffer from serious hallucination problems and tend to generate fabricated responses. Recently, several Contrastive Decoding (CD) strategies have been proposed to alleviate hallucination by introducing disturbed inputs. Although great progress has been made, these CD strategies mostly apply a one-size-fits-all approach across all input conditions. In this paper, we revisit this process through extensive experiments. The results show that hallucination causes are hybrid and that each generative step faces a unique hallucination challenge. Leveraging these insights, we introduce a simple yet effective Octopus-like framework that adaptively identifies the hallucination type and assembles a dynamic CD workflow. Our Octopus framework not only outperforms existing methods across four benchmarks but also demonstrates excellent deployability and expansibility. Code is available at https://github.com/LijunZhang01/Octopus.
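For readers unfamiliar with the underlying mechanism, a single contrastive-decoding step can be sketched as follows. This is a minimal illustration of generic CD (contrasting the logits from the clean input against those from a disturbed input, under an adaptive plausibility constraint), not the Octopus dynamic-routing implementation itself; the function name and the `alpha`/`beta` hyperparameters are illustrative assumptions.

```python
import numpy as np

def contrastive_decode_step(logits_orig, logits_disturbed, alpha=1.0, beta=0.1):
    """Pick the next token by contrasting clean vs. disturbed logits.

    Illustrative sketch of generic contrastive decoding, not the
    Octopus framework itself; alpha/beta values are assumptions.
    """
    logits_orig = np.asarray(logits_orig, dtype=float)
    logits_disturbed = np.asarray(logits_disturbed, dtype=float)

    # Adaptive plausibility constraint: only keep tokens whose probability
    # under the clean input is within a beta-fraction of the top token's.
    probs = np.exp(logits_orig - logits_orig.max())
    probs /= probs.sum()
    keep = probs >= beta * probs.max()

    # Contrast: amplify what the clean model prefers over the disturbed one,
    # suppressing tokens the disturbance makes more likely (hallucinations).
    scores = (1 + alpha) * logits_orig - alpha * logits_disturbed
    scores[~keep] = -np.inf
    return int(np.argmax(scores))
```

In this sketch, a token that the disturbed input boosts (a likely hallucination) gets penalized even if it scores well on the clean input alone; Octopus's contribution is choosing *which* disturbance/CD strategy to apply at each step rather than using one fixed recipe.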
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Evaluation | MMHal-Bench | MMHal Score | 2.61 | 174 |
| Generative Hallucination | Object-HalBench | CHAIR_S Score | 20.8 | 33 |
| Generative Hallucination | AMBER Generative | CHAIR Score | 6.1 | 24 |
| Discriminative Task | POPE MSCOCO (test) | Random Accuracy | 87.51 | 15 |
| Discriminative Task | AMBER Discrimination 1.0 (test) | Accuracy | 76.7 | 10 |