
Exploring Object Relation in Mean Teacher for Cross-Domain Detection

About

Rendering synthetic data (e.g., 3D CAD-rendered images) to generate annotations for learning deep models in vision tasks has attracted increasing attention in recent years. However, simply applying models learnt on synthetic images may lead to high generalization error on real images due to domain shift. To address this issue, recent progress in cross-domain recognition has featured the Mean Teacher, which directly formulates unsupervised domain adaptation as semi-supervised learning. The domain gap is thus naturally bridged with consistency regularization in a teacher-student scheme. In this work, we advance the Mean Teacher paradigm to make it applicable to cross-domain detection. Specifically, we present Mean Teacher with Object Relations (MTOR), which novelly remolds Mean Teacher under the backbone of Faster R-CNN by integrating object relations into the measure of consistency cost between teacher and student modules. Technically, MTOR first learns relational graphs that capture similarities between pairs of regions for the teacher and student respectively. The whole architecture is then optimized with three consistency regularizations: 1) region-level consistency to align the region-level predictions between teacher and student, 2) inter-graph consistency to match the graph structures between teacher and student, and 3) intra-graph consistency to enhance the similarity between regions of the same class within the graph of the student. Extensive experiments are conducted on the transfers across Cityscapes, Foggy Cityscapes, and SIM10k, and superior results are reported compared to state-of-the-art approaches. More remarkably, we obtain a new single-model record of 22.8% mAP on the Syn2Real detection dataset.
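The three consistency costs described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function names are hypothetical, cosine similarity is an assumed choice for the relational graph, and the exact loss forms (mean squared error, absolute graph difference, and a pairwise same-class penalty) are plausible stand-ins for the formulations in the paper.

```python
import numpy as np

def relational_graph(features):
    # Affinity graph over region features (R x D): pairwise cosine similarity
    # between region pairs, as a stand-in for the paper's relational graph.
    norm = features / np.linalg.norm(features, axis=1, keepdims=True)
    return norm @ norm.T

def region_consistency(p_teacher, p_student):
    # Region-level consistency: align teacher and student region-level
    # class predictions (mean squared error over R x C probability matrices).
    return float(np.mean((p_teacher - p_student) ** 2))

def inter_graph_consistency(g_teacher, g_student):
    # Inter-graph consistency: match the teacher and student graph
    # structures (mean absolute difference of the R x R affinity matrices).
    return float(np.mean(np.abs(g_teacher - g_student)))

def intra_graph_consistency(g_student, labels):
    # Intra-graph consistency: push similarity toward 1 for student
    # region pairs that share a (pseudo-)class label.
    same = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(same, 0.0)          # ignore self-similarity
    n_pairs = same.sum()
    if n_pairs == 0:
        return 0.0
    return float((same * (1.0 - g_student)).sum() / n_pairs)
```

In training, the teacher's graph and predictions would come from a weakly perturbed input through the exponential-moving-average teacher, the student's from a strongly perturbed input, and the three losses would be summed (with weights) alongside the supervised Faster R-CNN loss on the source domain.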

Qi Cai, Yingwei Pan, Chong-Wah Ngo, Xinmei Tian, Lingyu Duan, Ting Yao • 2019

Related benchmarks

| Task             | Dataset                                  | Metric      | Result | Rank |
|------------------|------------------------------------------|-------------|--------|------|
| Object Detection | Cityscapes to Foggy Cityscapes (test)    | mAP         | 35.1   | 196  |
| Object Detection | Foggy Cityscapes (test)                  | mAP         | 35.1   | 108  |
| Object Detection | Sim10K → Cityscapes (test)               | AP (Car)    | 46.6   | 104  |
| Object Detection | Cityscapes Adaptation from SIM-10k (val) | AP (Car)    | 46.6   | 97   |
| Object Detection | Foggy Cityscapes (val)                   | mAP         | 35.1   | 67   |
| Object Detection | Cityscapes to Foggy Cityscapes (val)     | mAP         | 35.1   | 57   |
| Object Detection | Sim10k to Cityscapes                     | AP (Car)    | 46.6   | 51   |
| Object Detection | Foggy Cityscapes                         | mAP         | 35.1   | 47   |
| Object Detection | FoggyCityscapes 1.0 (val)                | AP (person) | 30.6   | 42   |
| Object Detection | Cityscapes S -> C adaptation (val)       | mAP         | 46.6   | 37   |

Showing 10 of 14 rows.
