
Generating Enhanced Negatives for Training Language-Based Object Detectors

About

The recent progress in language-based open-vocabulary object detection can be largely attributed to finding better ways of leveraging large-scale data with free-form text annotations. Training such models with a discriminative objective function has proven successful, but requires good positive and negative samples. However, the free-form nature and the open vocabulary of object descriptions make the space of negatives extremely large. Prior works randomly sample negatives or use rule-based techniques to build them. In contrast, we propose to leverage the vast knowledge built into modern generative models to automatically build negatives that are more relevant to the original data. Specifically, we use large language models to generate negative text descriptions, and text-to-image diffusion models to generate corresponding negative images. Our experimental analysis confirms the relevance of the generated negative data, and its use in language-based detectors improves performance on two complex benchmarks. Code is available at https://github.com/xiaofeng94/Gen-Enhanced-Negs.
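To make the role of negatives concrete, the sketch below shows a generic discriminative objective: a region embedding is scored against one positive description and several negative descriptions, and the loss is softmax cross-entropy with the positive at index 0. This is a hypothetical, simplified illustration, not the paper's actual training loss; the function name and the embeddings are assumptions. It does show why negative quality matters: a "hard" negative that is semantically close to the positive produces a larger loss (and hence a stronger gradient) than a randomly sampled, unrelated one.

```python
import numpy as np

def contrastive_loss(region_emb, pos_text_emb, neg_text_embs, temperature=0.07):
    """Softmax cross-entropy over one positive and K negative descriptions.

    Hypothetical sketch of a discriminative region-text objective; real
    language-based detectors use their own (more involved) losses.
    """
    # Stack positive (index 0) and negative text embeddings.
    texts = np.vstack([pos_text_emb] + list(neg_text_embs))
    # Cosine similarity between the region and each description.
    texts = texts / np.linalg.norm(texts, axis=1, keepdims=True)
    region = region_emb / np.linalg.norm(region_emb)
    logits = texts @ region / temperature
    # Numerically stable log-softmax; the positive sits at index 0.
    logits = logits - logits.max()
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]
```

With a region aligned to its positive description, a negative embedding pointing in a similar direction yields a higher loss than an orthogonal (random) one, which is exactly the motivation for generating negatives that are relevant to the original data rather than sampling them uniformly.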

Shiyu Zhao, Long Zhao, Vijay Kumar B.G, Yumin Suh, Dimitris N. Metaxas, Manmohan Chandraker, Samuel Schulter • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Detection | D3 | Full Score | 26 | 35 |
| Object Detection | OmniLabel | AP (Description) | 25.1 | 19 |
| Described Object Detection | D3 (Abs) | mAP | 28.1 | 16 |
| Described Object Detection | D3 (Full) | mAP | 26 | 16 |
| Described Object Detection | D3 (Pres) | mAP | 25.2 | 16 |
| Described Object Detection | D3 (S) | mAP | 35.5 | 14 |
| Described Object Detection | D3 (M) | mAP | 29.7 | 14 |
| Described Object Detection | D3 (L) | mAP | 20.5 | 14 |
| Described Object Detection | D3 (XL) | mAP | 14.2 | 14 |
