
Weak-to-Strong Compositional Learning from Generative Models for Language-based Object Detection

About

Vision-language (VL) models often exhibit a limited understanding of complex expressions of visual objects (e.g., attributes, shapes, and their relations) when given complex and diverse language queries. Traditional approaches attempt to improve VL models using hard-negative synthetic text, but their effectiveness is limited. In this paper, we harness the exceptional compositional understanding capabilities of generative foundation models. We introduce a novel method for structured synthetic data generation aimed at enhancing the compositional understanding of VL models in language-based object detection. Our framework generates densely paired positive and negative triplets (image, text description, and bounding boxes) in both the image and text domains. By leveraging these synthetic triplets, we transform 'weaker' VL models into 'stronger' ones in terms of compositional understanding, a process we call "Weak-to-Strong Compositional Learning" (WSCL). To achieve this, we propose a new compositional contrastive learning formulation that discovers semantics and structures in complex descriptions from synthetic triplets. As a result, VL models trained with our synthetic data generation exhibit a significant performance boost of up to +5 AP on the OmniLabel benchmark and +6.9 AP on the D3 benchmark over existing baselines.
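The abstract does not spell out the exact loss, but the core idea of contrasting an image region against its positive description and hard-negative descriptions can be sketched with a generic InfoNCE-style objective. Everything below (function name, toy embeddings, temperature value) is an illustrative assumption, not the paper's actual formulation:

```python
import numpy as np

def contrastive_triplet_loss(region, pos_text, neg_texts, tau=0.07):
    """InfoNCE-style sketch: pull the region embedding toward its positive
    description and push it away from hard-negative descriptions.
    All inputs are raw embedding vectors; neg_texts is a 2D array."""
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    r = normalize(region)
    texts = normalize(np.vstack([pos_text, neg_texts]))  # row 0 = positive
    logits = texts @ r / tau          # scaled cosine similarities
    logits -= logits.max()            # for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])          # cross-entropy with positive at index 0

# Toy example: the loss is small when the region matches the positive
# description, and large when the labels are swapped.
region = np.array([1.0, 0.0, 0.0])
pos = np.array([0.9, 0.1, 0.0])
negs = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
loss_matched = contrastive_triplet_loss(region, pos, negs)
loss_swapped = contrastive_triplet_loss(region, negs[0], np.vstack([pos, negs[1]]))
```

In the paper's setting, the densely paired synthetic triplets would supply both the positive description and the compositionally perturbed hard negatives for each region.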

Kwanyong Park, Kuniaki Saito, Donghyun Kim • 2024

Related benchmarks

Task                        Dataset    Metric      Result  Rank
Object Detection            D3         Full Score  30.8    35
Described Object Detection  D3 (Full)  mAP         26.5    16
Described Object Detection  D3 (Pres)  mAP         26.0    16
Described Object Detection  D3 (Abs)   mAP         27.7    16
