
Dense Siamese Network for Dense Unsupervised Learning

About

This paper presents Dense Siamese Network (DenseSiam), a simple unsupervised learning framework for dense prediction tasks. It learns visual representations by maximizing the similarity between two views of one image under two types of consistency: pixel consistency and region consistency. Concretely, DenseSiam first maximizes pixel-level spatial consistency according to the exact location correspondence in the overlapped area. It also extracts a batch of region embeddings corresponding to sub-regions of the overlapped area, which are contrasted for region consistency. In contrast to previous methods that require negative pixel pairs, momentum encoders, or heuristic masks, DenseSiam benefits from the simple Siamese network and optimizes consistency at different granularities. It also shows that simple location correspondence and interacted region embeddings are effective enough to learn the similarity. We apply DenseSiam on ImageNet and obtain competitive improvements on various downstream tasks. We further show that, with only a few extra task-specific losses, the simple framework can directly conduct dense prediction tasks. On an existing unsupervised semantic segmentation benchmark, it surpasses state-of-the-art segmentation methods by 2.1 mIoU with 28% of the training cost. Code and models are released at https://github.com/ZwwWayne/DenseSiam.
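The pixel-consistency objective described above can be sketched as a negative cosine similarity averaged over spatially corresponding pixel embeddings of the two views. This is a minimal NumPy illustration, not the authors' implementation: the actual DenseSiam (following the SimSiam design) also applies a predictor head to one branch and a stop-gradient to the other, both omitted here, and the function name is illustrative.

```python
import numpy as np

def pixel_consistency_loss(feat_a, feat_b, eps=1e-8):
    """Illustrative pixel-consistency loss (assumption: a plain
    negative cosine similarity; predictor and stop-gradient from
    the SimSiam-style pipeline are omitted).

    feat_a, feat_b: (H, W, C) dense feature maps whose pixel
    locations are already aligned to the overlapped crop region.
    """
    # Normalize each pixel embedding to unit length.
    a = feat_a / (np.linalg.norm(feat_a, axis=-1, keepdims=True) + eps)
    b = feat_b / (np.linalg.norm(feat_b, axis=-1, keepdims=True) + eps)
    # Per-pixel cosine similarity, averaged over the map and negated,
    # so that perfectly matching views give the minimum loss of -1.
    return -np.mean(np.sum(a * b, axis=-1))

# Two "views" with identical dense features are maximally consistent.
feat = np.random.rand(4, 4, 16)
loss = pixel_consistency_loss(feat, feat)
```

Region consistency works analogously but contrasts a batch of pooled region embeddings from the overlapped area rather than individual pixels.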

Wenwei Zhang, Jiangmiao Pang, Kai Chen, Chen Change Loy • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Detection | COCO 2017 (val) | AP | 40.8 | 2454 |
| Instance Segmentation | COCO 2017 (val) | -- | -- | 1144 |
| Semantic Segmentation | Cityscapes | mIoU | 77.0 | 578 |
| Object Detection | VOC 2007 (test) | AP@50 | 82.9 | 52 |
| Unsupervised Semantic Segmentation | COCO Curated (test) | mIoU | 0.164 | 4 |

Other info

Code: https://github.com/ZwwWayne/DenseSiam