
Know "No" Better: A Data-Driven Approach for Enhancing Negation Awareness in CLIP

About

While CLIP has significantly advanced multimodal understanding by bridging vision and language, its inability to grasp negation, such as failing to differentiate "parking" from "no parking", poses substantial challenges. By analyzing the data used in the public CLIP model's pre-training, we posit that this limitation stems from a lack of negation-inclusive data. To address this, we introduce data generation pipelines that employ a large language model (LLM) and a multimodal LLM to produce negation-inclusive captions. Fine-tuning CLIP with data generated by our pipelines, we develop NegationCLIP, which enhances negation awareness while preserving generality. Moreover, to enable a comprehensive evaluation of negation understanding, we propose NegRefCOCOg, a benchmark tailored to test VLMs' ability to interpret negation across diverse expressions and positions within a sentence. Experiments on various CLIP architectures validate the effectiveness of our data generation pipelines in enhancing CLIP's ability to perceive negation accurately. Additionally, NegationCLIP's enhanced negation awareness has practical applications across various multimodal tasks, demonstrated by performance gains in text-to-image generation and referring image segmentation.
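The paper's pipeline prompts an LLM (or multimodal LLM) to rewrite image captions into negation-inclusive variants for fine-tuning. As a rough illustration only, the toy sketch below applies a rule-based rewrite to present-participle captions; the function name `negate_caption` and the example captions are hypothetical stand-ins, not the authors' actual pipeline, which relies on LLM prompting rather than string rules.

```python
# Toy stand-in for an LLM-driven negation-caption pipeline: insert "not"
# before a chosen participle to turn a positive caption into a negated one.
# A real pipeline would prompt an LLM and verify the result against the image.

def negate_caption(caption: str, target: str) -> str:
    """Insert a negation before the first occurrence of `target`,
    e.g. "a man wearing a hat" -> "a man not wearing a hat"."""
    return caption.replace(target, f"not {target}", 1)

if __name__ == "__main__":
    examples = [
        ("a man wearing a hat", "wearing"),
        ("a dog holding a frisbee", "holding"),
    ]
    for caption, verb in examples:
        print(negate_caption(caption, verb))
```

Pairing each original caption with such a negated counterpart is the kind of contrastive signal the abstract describes using to fine-tune CLIP's text encoder toward negation awareness.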

Junsung Park, Jungbeom Lee, Jongyoon Song, Sangwon Yu, Dahuin Jung, Sungroh Yoon• 2025

Related benchmarks

Task                       Dataset                  Metric            Result  Rank
Text-to-Image Retrieval    COCO 2014 (test)         Accuracy          56.78   12
Negation Comprehension     NegRefCOCOg (test)       Accuracy          65      12
Negation Comprehension     CC-Neg (test)            Accuracy          70.3    12
Multiple Choice Question   NegBench (COCO subset)   Overall Accuracy  10.21   4
