One-Shot Learning for Pose-Guided Person Image Synthesis in the Wild
About
Current Pose-Guided Person Image Synthesis (PGPIS) methods depend heavily on large amounts of labeled triplet data to train the generator in a supervised manner. However, they often falter when applied to in-the-wild samples, primarily due to the distribution gap between the training datasets and real-world test samples. While some researchers aim to enhance model generalizability through sophisticated training procedures, advanced architectures, or by creating more diverse datasets, we adopt the test-time fine-tuning paradigm to customize a pre-trained Text2Image (T2I) model. However, naively applying test-time tuning results in inconsistencies in facial identities and appearance attributes. To address this, we introduce a Visual Consistency Module (VCM), which enhances appearance consistency by combining face, text, and image embeddings. Our approach, named OnePoseTrans, requires only a single source image to generate high-quality pose transfer results, offering greater stability than state-of-the-art data-driven methods. For each test case, OnePoseTrans customizes a model in around 48 seconds on an NVIDIA V100 GPU.
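The abstract describes the VCM only at a high level: it fuses face, text, and image embeddings into a single conditioning signal for the pre-trained T2I model. A minimal sketch of such a fusion step is shown below; the embedding sizes, projection matrices, and weighted-sum scheme are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 768  # assumed shared conditioning dimension of the T2I model

# Toy embeddings with different native sizes, standing in for a face
# recognizer, a text encoder, and an image encoder.
face_emb = rng.normal(size=512)
text_emb = rng.normal(size=768)
image_emb = rng.normal(size=1024)

# Randomly initialized projections stand in for learned weights that
# map each embedding into the shared conditioning space.
w_face = rng.normal(size=(512, EMB_DIM)) / np.sqrt(512)
w_text = rng.normal(size=(768, EMB_DIM)) / np.sqrt(768)
w_image = rng.normal(size=(1024, EMB_DIM)) / np.sqrt(1024)

def visual_consistency_module(face, text, image, alpha=0.5, beta=0.3):
    """Fuse face, text, and image embeddings into one conditioning vector
    (hypothetical weighted-sum fusion; the paper does not specify the scheme)."""
    fused = (alpha * (face @ w_face)
             + beta * (image @ w_image)
             + (1.0 - alpha - beta) * (text @ w_text))
    # Normalize so the fused vector has a consistent scale for the
    # cross-attention layers of the pre-trained T2I model.
    return fused / np.linalg.norm(fused)

cond = visual_consistency_module(face_emb, text_emb, image_emb)
print(cond.shape)
```

During test-time fine-tuning, a vector like `cond` would replace or augment the usual text-only conditioning, anchoring the generator to the source identity while the pose changes.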
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Human Avatar Generation | WPose out-of-domain (test) | PSNR 17.1 | 8 |
| Pose-conditioned avatar generation | WPose (Out-of-Domain) | M-PSNR 17.23 | 8 |
| Human Avatar Generation | DeepFashion In-Domain (test) | PSNR 13.12 | 8 |
| Pose-conditioned avatar generation | DeepFashion In-Domain | PSNR 13.57 | 8 |