
VTON-HandFit: Virtual Try-on for Arbitrary Hand Pose Guided by Hand Priors Embedding

About

Although diffusion-based image virtual try-on has made considerable progress, emerging approaches still struggle to handle hand occlusion (i.e., clothing regions occluded by the hand), leading to a notable degradation of try-on performance. To tackle this issue, which is widespread in real-world scenarios, we propose VTON-HandFit, which leverages hand priors to reconstruct the appearance and structure of hand-occluded regions. First, we tailor a Handpose Aggregation Net that uses a ControlNet-based structure to explicitly and adaptively encode global hand and pose priors. Second, to fully exploit hand-related structure and appearance information, we propose a Hand-feature Disentanglement Embedding module that disentangles the hand priors into hand structure-parametric and visual-appearance features, and customize a masked cross-attention for further decoupled feature embedding. Finally, we design a hand-canny constraint loss to better learn structural edge knowledge from the hand template of the model image. VTON-HandFit outperforms the baselines in qualitative and quantitative evaluations on public datasets and our self-collected hand-occlusion Handfit-3K dataset, particularly for arbitrary hand-pose occlusion cases in real-world scenarios. The code and dataset will be available at https://github.com/VTON-HandFit/VTON-HandFit.
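To make the masked cross-attention concrete, here is a minimal PyTorch sketch, assuming the hand mask gates where the disentangled hand-prior tokens are injected into the denoising features. The module name, tensor shapes, and the residual gating are illustrative assumptions, not the authors' implementation.

# Minimal sketch of masked cross-attention for decoupled feature
# embedding. All names (MaskedCrossAttention, hand_mask, ...) are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class MaskedCrossAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, context, hand_mask):
        # x:         (B, N, C) query tokens from the denoising network
        # context:   (B, M, C) hand-prior tokens (structure or appearance)
        # hand_mask: (B, N) with 1 inside the hand region, 0 elsewhere
        B, N, C = x.shape
        h = self.num_heads
        q = self.to_q(x).view(B, N, h, C // h).transpose(1, 2)
        k = self.to_k(context).view(B, -1, h, C // h).transpose(1, 2)
        v = self.to_v(context).view(B, -1, h, C // h).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        out = self.to_out(out)
        # Inject hand-prior features only at masked (hand) locations,
        # leaving garment and background tokens untouched.
        return x + out * hand_mask.unsqueeze(-1)

Running the module twice, once with structure-parametric tokens and once with visual-appearance tokens as `context`, would give one way to keep the two decoupled feature streams separate.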
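The hand-canny constraint loss can be sketched similarly. Since the Canny operator is non-differentiable, one plausible formulation precomputes a Canny edge map from the model image's hand template and penalizes its L1 discrepancy against a differentiable Sobel edge response of the generated image, restricted to the hand mask. The Sobel stand-in, thresholds, and all function names below are assumptions, not the paper's definition.

# Hedged sketch of a hand-canny constraint loss: compare a precomputed
# (non-differentiable) Canny edge map of the hand template against a
# differentiable Sobel edge magnitude of the generated image, inside
# the hand mask only. Names and thresholds are assumptions.
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def canny_edge_target(hand_template_gray: np.ndarray) -> torch.Tensor:
    """Target edges from the hand template, (H, W) uint8 -> (H, W) in [0, 1]."""
    edges = cv2.Canny(hand_template_gray, 100, 200)
    return torch.from_numpy(edges).float().div_(255.0)

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Differentiable edge magnitude of a (B, 1, H, W) image in [0, 1]."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(-1, -2)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def hand_canny_loss(pred_gray, target_edges, hand_mask):
    """Mean L1 edge discrepancy inside the hand region.
    pred_gray:    (B, 1, H, W) generated image (grayscale), requires grad
    target_edges: (B, 1, H, W) precomputed Canny map of the hand template
    hand_mask:    (B, 1, H, W) binary hand mask"""
    pred_edges = sobel_edges(pred_gray)
    diff = torch.abs(pred_edges - target_edges) * hand_mask
    return diff.sum() / hand_mask.sum().clamp(min=1.0)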

Yujie Liang, Xiaobin Hu, Boyuan Jiang, Donghao Luo, Kai Wu, Wenhui Han, Taisong Jin, Chengjie Wang • 2024

Related benchmarks

Task             Dataset                 Result         Rank
Virtual Try-On   VITON-HD 1.0 (test)     FID: 5.9564    27
Virtual Try-On   DressCode 1.0 (test)    FID: 2.8666    14
Virtual Try-On   Handfit-3K (test)       FID: 5.6027    12

Other info

Code
