
High-Fidelity Relightable Monocular Portrait Animation with Lighting-Controllable Video Diffusion Model

About

Relightable portrait animation aims to animate a static reference portrait so that it matches the head movements and expressions of a driving video while adapting to user-specified or reference lighting conditions. Existing portrait animation methods fail to produce relightable portraits because they do not separate and manipulate intrinsic (identity and appearance) and extrinsic (pose and lighting) features. In this paper, we present a Lighting-Controllable Video Diffusion model (LCVD) for high-fidelity, relightable portrait animation. LCVD addresses this limitation by distinguishing these two feature types through dedicated subspaces within the feature space of a pre-trained image-to-video diffusion model. Specifically, we use shading hints rendered from the portrait's 3D mesh, pose, and lighting to represent the extrinsic attributes, while the reference portrait represents the intrinsic attributes. During training, a reference adapter maps the reference into the intrinsic feature subspace and a shading adapter maps the shading hints into the extrinsic feature subspace. By merging features from these subspaces, the model achieves fine-grained control over lighting, pose, and expression in the generated animations. Extensive evaluations show that LCVD outperforms state-of-the-art methods in lighting realism, image quality, and video consistency, setting a new benchmark for relightable portrait animation.
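The two-adapter idea can be sketched as follows: two small networks project the reference portrait (intrinsic) and the rendered shading hints (extrinsic) into subspaces of the diffusion backbone's feature space, and the merged features condition generation. This is a minimal illustration only; the `Adapter` architecture, channel sizes, and additive merging below are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Toy adapter projecting a conditioning image into a feature
    subspace. Layer choices here are hypothetical, not from the paper."""
    def __init__(self, in_ch: int, feat_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Intrinsic branch: identity/appearance from the reference portrait.
reference_adapter = Adapter(in_ch=3, feat_ch=64)
# Extrinsic branch: pose/lighting from mesh-rendered shading hints.
shading_adapter = Adapter(in_ch=3, feat_ch=64)

reference = torch.randn(1, 3, 64, 64)      # reference portrait (dummy)
shading_hint = torch.randn(1, 3, 64, 64)   # shading hint image (dummy)

# Merge intrinsic and extrinsic subspace features; in the full model the
# merged features would condition the frozen image-to-video diffusion UNet.
merged = reference_adapter(reference) + shading_adapter(shading_hint)
print(merged.shape)  # torch.Size([1, 64, 64, 64])
```

Because the backbone stays frozen, only the two adapters are trained, which is what lets pose/lighting control be edited independently of identity at inference time.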

Mingtao Guo, Guanyu Xing, Yanli Liu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Portrait Animation (Self-reenactment) | VFHQ (test) | FVD 470.8 | 23 |
| Portrait Animation (Cross-reenactment) | FFHQ source + VFHQ driving (test) | CSIM 0.553 | 18 |
| Self-reenactment portrait animation | MEAD 59 (test) | CSIM 0.8212 | 18 |
| Portrait Relighting | HDTF | LE 0.738 | 6 |
| Cross-identity portrait animation | HDTF | ID 0.876 | 4 |
| Portrait Relighting | FFHQ | LE 0.938 | 4 |
