
UniGS: Unified Representation for Image Generation and Segmentation

About

This paper introduces a novel unified representation of diffusion models for image generation and segmentation. Specifically, we use a colormap to represent entity-level masks, addressing the challenge of varying entity numbers while aligning the representation closely with the image RGB domain. Two novel modules, the location-aware color palette and the progressive dichotomy module, are proposed to support our mask representation. On the one hand, the location-aware palette guarantees that the colors are consistent with the entities' locations. On the other hand, the progressive dichotomy module efficiently decodes the synthesized colormap into high-quality entity-level masks via a depth-first binary search, without knowing the number of clusters in advance. To tackle the lack of large-scale segmentation training data, we employ an inpainting pipeline, which also improves the flexibility of diffusion models across various tasks, including inpainting, image synthesis, referring segmentation, and entity segmentation. Comprehensive experiments validate the effectiveness of our approach, demonstrating segmentation mask quality comparable to state-of-the-art methods and adaptability to multiple tasks. The code will be released at https://github.com/qqlu/Entity.
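The abstract's progressive dichotomy module decodes a synthesized colormap into entity masks by recursively splitting pixel colors in two, depth-first, without a preset cluster count. The sketch below illustrates that idea under stated assumptions: the function name, the variance-based stopping rule, and the simple 2-means splitter are hypothetical stand-ins for the paper's exact procedure, which is not detailed here.

```python
import numpy as np

def progressive_dichotomy(colors, var_threshold=25.0, max_depth=8):
    """Illustrative sketch of depth-first binary color splitting.

    colors: (N, 3) array of pixel colors from a synthesized colormap.
    Returns a list of index arrays, one per recovered cluster (entity).
    """
    masks = []

    def split(idx, depth):
        pts = colors[idx]
        # Stop splitting when the cluster is color-homogeneous enough.
        if depth >= max_depth or pts.var(axis=0).sum() < var_threshold:
            masks.append(idx)
            return
        # Simple 2-means along the axis of largest color variance.
        axis = pts.var(axis=0).argmax()
        c0, c1 = pts[:, axis].min(), pts[:, axis].max()
        for _ in range(10):
            assign = np.abs(pts[:, axis] - c1) < np.abs(pts[:, axis] - c0)
            if assign.all() or (~assign).all():
                break  # degenerate split: treat as a single cluster
            c0 = pts[~assign][:, axis].mean()
            c1 = pts[assign][:, axis].mean()
        if assign.all() or (~assign).all():
            masks.append(idx)
            return
        # Depth-first recursion into the two halves.
        split(idx[~assign], depth + 1)
        split(idx[assign], depth + 1)

    split(np.arange(colors.shape[0]), 0)
    return masks
```

Because the recursion only stops when a cluster's color variance falls under the threshold, the number of entities emerges from the data rather than being fixed up front, mirroring the "without knowing the cluster numbers" property claimed in the abstract.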

Lu Qi, Lehan Yang, Weidong Guo, Yu Xu, Bo Du, Varun Jampani, Ming-Hsuan Yang • 2023

Related benchmarks

Task                  | Dataset                     | Result      | Rank
----------------------|-----------------------------|-------------|-----
Image Synthesis       | COCO single object (val)    | FID 15.272  | 4
Image Synthesis       | COCO multiple object (val)  | FID 14.271  | 4
Entity Segmentation   | COCO                        | mIoU 63.1   | 4
Image Inpainting      | COCO single object (val)    | FID 3.78    | 3
Image Inpainting      | COCO multiple objects (val) | FID 5.89    | 3
Referring Segmentation| COCO                        | mIoU 80.8   | 2
