
Cross-Image Relational Knowledge Distillation for Semantic Segmentation

About

Current Knowledge Distillation (KD) methods for semantic segmentation typically guide the student to mimic the teacher's structured information computed from individual data samples. However, they ignore the global semantic relations among pixels across different images, which are also valuable for KD. This paper proposes a novel Cross-Image Relational KD (CIRKD) method, which transfers structured pixel-to-pixel and pixel-to-region relations across whole images. The motivation is that a good teacher network constructs a well-structured feature space in terms of global pixel dependencies. By mimicking the teacher's better-structured semantic relations, the student improves its segmentation performance. Experimental results on the Cityscapes, CamVid and Pascal VOC datasets demonstrate the effectiveness of the proposed approach against state-of-the-art distillation methods. The code is available at https://github.com/winycg/CIRKD.
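The pixel-to-region idea can be illustrated with a toy sketch: each pixel embedding is compared against a bank of region (class-centroid) embeddings gathered from other images, and the student is trained to match the teacher's similarity distribution. This is a minimal NumPy illustration under assumed shapes and a softmax/KL formulation, not the authors' implementation; the function and variable names (`pixel_to_region_kd_loss`, `memory`, `tau`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pixel_to_region_kd_loss(f_s, f_t, memory, tau=0.1, eps=1e-8):
    """Toy cross-image pixel-to-region distillation loss (illustrative only).

    f_s, f_t : (N, C) L2-normalised student / teacher pixel embeddings.
    memory   : (K, C) L2-normalised region embeddings drawn from *other*
               images, e.g. via a memory bank of class centroids.
    Returns the mean KL divergence between the teacher's and the student's
    pixel-to-region similarity distributions.
    """
    p_t = softmax(f_t @ memory.T / tau, axis=1)  # teacher relations, (N, K)
    p_s = softmax(f_s @ memory.T / tau, axis=1)  # student relations, (N, K)
    kl = np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=1)
    return float(kl.mean())

# Usage with random embeddings; loss is non-negative and zero when
# the student's relations exactly match the teacher's.
rng = np.random.default_rng(0)
norm = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
f_t = norm(rng.standard_normal((16, 8)))
f_s = norm(f_t + 0.1 * rng.standard_normal((16, 8)))
memory = norm(rng.standard_normal((32, 8)))
print(pixel_to_region_kd_loss(f_s, f_t, memory))
```

The pixel-to-pixel variant is analogous: replace the memory of region centroids with pixel embeddings sampled from other mini-batch images.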

Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yongjun Xu, Qian Zhang • 2022

Related benchmarks

Task                  | Dataset             | Result (mIoU) | Rank
Semantic segmentation | ADE20K (val)        | 35.15         | 2731
Semantic segmentation | Cityscapes (test)   | 75.05         | 1145
Semantic segmentation | CamVid (test)       | 68.65         | 411
Semantic segmentation | PASCAL VOC (val)    | 74.78         | 338
