FG-CLIP: Fine-Grained Visual and Textual Alignment
About
Contrastive Language-Image Pre-training (CLIP) excels in multimodal tasks such as image-text retrieval and zero-shot classification but struggles with fine-grained understanding due to its focus on coarse-grained short captions. To address this, we propose Fine-Grained CLIP (FG-CLIP), which enhances fine-grained understanding through three key innovations. First, we leverage large multimodal models to generate 1.6 billion long caption-image pairs for capturing global-level semantic details. Second, a high-quality dataset is constructed with 12 million images and 40 million region-specific bounding boxes aligned with detailed captions to ensure precise, context-rich representations. Third, 10 million hard fine-grained negative samples are incorporated to improve the model's ability to distinguish subtle semantic differences. We integrate the high-quality region-specific annotations and the hard fine-grained negative samples into a comprehensive dataset, termed FineHARD, and design training methods tailored to each type of data. Extensive experiments demonstrate that FG-CLIP outperforms the original CLIP and other state-of-the-art methods across various downstream tasks, including fine-grained understanding, open-vocabulary object detection, image-text retrieval, and general multimodal benchmarks. These results highlight FG-CLIP's effectiveness in capturing fine-grained image details and improving overall model performance. The data, code, and models are available at https://github.com/360CVGroup/FG-CLIP.
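The hard fine-grained negatives described above are captions that differ from the positive only in subtle details, forcing the model to rank the true caption above near-duplicates. A minimal sketch of such an InfoNCE-style objective for a single image is shown below; the function names and toy embeddings are illustrative, not the paper's actual implementation.

```python
# Sketch (NOT the authors' code) of a contrastive loss over one positive
# caption and k hard fine-grained negative captions for a single image.
import numpy as np

def l2_normalize(x):
    """L2-normalize along the last axis, as CLIP-style models do."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def hard_negative_contrastive_loss(image_emb, text_embs, temperature=0.07):
    """InfoNCE-style loss for one image.

    image_emb : (d,) L2-normalized image embedding.
    text_embs : (1 + k, d) L2-normalized caption embeddings; row 0 is the
                positive caption, rows 1..k are hard fine-grained negatives.
    """
    logits = text_embs @ image_emb / temperature   # (1 + k,) cosine sims / T
    logits = logits - logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over captions
    return -np.log(probs[0])                       # cross-entropy, target = row 0

# Toy example with random embeddings; the positive is built to lie
# close to the image embedding, the negatives are random.
rng = np.random.default_rng(0)
img = l2_normalize(rng.normal(size=8))
pos = l2_normalize(img + 0.1 * rng.normal(size=8))
negs = l2_normalize(rng.normal(size=(3, 8)))
loss = hard_negative_contrastive_loss(img, np.vstack([pos, negs]))
```

In practice the loss is computed in batches over both directions (image-to-text and text-to-image), and the quality of the negatives, not the loss form, is what drives the fine-grained gains.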
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1: 76.4 | 531 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1: 81.3 | 432 |
| Text-to-Image Retrieval | MSCOCO 5K (test) | R@1: 50.46 | 308 |
| Object Detection | DOTA 1.0 (test) | -- | 256 |
| Classification | OrganAMNIST | Accuracy: 47.9 | 125 |
| Image-to-Text Retrieval | DCI | R@1: 61.8 | 79 |
| Text-to-Image Retrieval | DCI | R@1: 60.6 | 79 |
| Text-to-Image Retrieval | RSITMD (test) | R@1: 15.84 | 77 |
| Image-to-Text Retrieval | RSITMD (test) | R@1: 16.15 | 77 |
| Aggregate Model Performance | Combined Benchmark Suite | Average Score: 70.7 | 57 |