
One-Shot Learning for Pose-Guided Person Image Synthesis in the Wild

About

Current Pose-Guided Person Image Synthesis (PGPIS) methods depend heavily on large amounts of labeled triplet data to train the generator in a supervised manner. However, they often falter when applied to in-the-wild samples, primarily due to the distribution gap between the training datasets and real-world test samples. While some researchers aim to enhance model generalizability through sophisticated training procedures, advanced architectures, or by creating more diverse datasets, we adopt the test-time fine-tuning paradigm to customize a pre-trained Text2Image (T2I) model. However, naively applying test-time tuning results in inconsistencies in facial identity and appearance attributes. To address this, we introduce a Visual Consistency Module (VCM), which enhances appearance consistency by combining face, text, and image embeddings. Our approach, named OnePoseTrans, requires only a single source image to generate high-quality pose transfer results, offering greater stability than state-of-the-art data-driven methods. For each test case, OnePoseTrans customizes a model in around 48 seconds on an NVIDIA V100 GPU.
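The page does not include code, but the fusion idea behind the VCM can be illustrated with a minimal, hypothetical PyTorch sketch: face and image embeddings are projected into the text-embedding space and fused with the text tokens before conditioning the T2I model. All names and dimensions below (VisualConsistencyModule, face_dim, text_dim, the cross-attention fusion) are our own assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a VCM-style fusion module (not the paper's code).
# Assumption: identity (face) and appearance (image) embeddings are mapped
# into the text-embedding space, and the text tokens attend to them so the
# conditioning sequence carries consistent identity/appearance cues.
import torch
import torch.nn as nn


class VisualConsistencyModule(nn.Module):
    def __init__(self, face_dim=512, image_dim=1024, text_dim=768):
        super().__init__()
        # Project each visual modality into the text-embedding space.
        self.face_proj = nn.Linear(face_dim, text_dim)
        self.image_proj = nn.Linear(image_dim, text_dim)
        # Fuse projected visual tokens into the text tokens via cross-attention.
        self.attn = nn.MultiheadAttention(text_dim, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, face_emb, image_emb, text_emb):
        # face_emb: (B, face_dim), image_emb: (B, image_dim),
        # text_emb: (B, T, text_dim) -- the usual T2I conditioning sequence.
        visual = torch.stack(
            [self.face_proj(face_emb), self.image_proj(image_emb)], dim=1
        )  # (B, 2, text_dim)
        # Text tokens query the visual tokens; the residual keeps text intact.
        fused, _ = self.attn(query=text_emb, key=visual, value=visual)
        return self.norm(text_emb + fused)


if __name__ == "__main__":
    vcm = VisualConsistencyModule()
    face = torch.randn(1, 512)      # e.g. an ArcFace-style identity embedding
    image = torch.randn(1, 1024)    # e.g. a CLIP-style image embedding
    text = torch.randn(1, 77, 768)  # e.g. CLIP text tokens
    print(vcm(face, image, text).shape)  # torch.Size([1, 77, 768])
```

The residual connection is one plausible design choice here: it would let the fused sequence default to the original text conditioning while the visual tokens only nudge it toward the source identity and appearance.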

Dongqi Fan, Tao Chen, Mingjie Wang, Rui Ma, Qiang Tang, Zili Yi, Qian Wang, Liang Chang • 2024

Related benchmarks

Task                                  Dataset                       Result        Rank
Human Avatar Generation               WPose out-of-domain (test)    PSNR 17.1     8
Pose-conditioned avatar generation    WPose (Out-of-Domain)         M-PSNR 17.23  8
Human Avatar Generation               DeepFashion In-Domain (test)  PSNR 13.12    8
Pose-conditioned avatar generation    DeepFashion In-Domain         PSNR 13.57    8
