
LIA-X: Interpretable Latent Portrait Animator

About

We introduce LIA-X, a novel interpretable portrait animator designed to transfer facial dynamics from a driving video to a source portrait with fine-grained control. LIA-X is an autoencoder that models motion transfer as linear navigation of motion codes in latent space. Crucially, it incorporates a novel Sparse Motion Dictionary that enables the model to disentangle facial dynamics into interpretable factors. Deviating from previous 'warp-render' approaches, the interpretability of the Sparse Motion Dictionary allows LIA-X to support a highly controllable 'edit-warp-render' strategy, enabling precise manipulation of fine-grained facial semantics in the source portrait. This helps narrow initial pose and expression differences between the source portrait and the driving video. Moreover, we demonstrate the scalability of LIA-X by successfully training a large-scale model with approximately 1 billion parameters on extensive datasets. Experimental results show that our proposed method outperforms previous approaches in both self-reenactment and cross-reenactment tasks across several benchmarks. Additionally, the interpretable and controllable nature of LIA-X supports practical applications such as fine-grained, user-guided image and video editing, as well as 3D-aware portrait video manipulation.
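The core idea — editing a source latent by moving it along a linear combination of sparse, interpretable motion directions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the random dictionary, and names like `navigate` are assumptions; in LIA-X the dictionary and latents are learned.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, num_directions = 512, 20  # illustrative sizes, not the paper's

# Stand-in for a learned Sparse Motion Dictionary: each row is a motion
# direction, masked so it touches only a few latent coordinates. Sparsity
# is what makes each direction correspond to an interpretable factor.
D = rng.normal(size=(num_directions, latent_dim))
D *= rng.random((num_directions, latent_dim)) < 0.05  # keep ~5% of entries

def navigate(z_source, magnitudes, dictionary):
    """'Edit' step of edit-warp-render: move the source latent linearly,
    z' = z + sum_i a_i * d_i, before warping and rendering."""
    return z_source + magnitudes @ dictionary

z_src = rng.normal(size=latent_dim)     # stand-in for an encoded portrait
a = np.zeros(num_directions)
a[3] = 0.8                              # tweak a single interpretable factor
z_edit = navigate(z_src, a, D)
```

Because `D[3]` is sparse, only the few latent coordinates it touches change, which is what makes a single slider-style edit (e.g. one pose or expression factor) possible without disturbing the rest of the portrait.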

Yaohui Wang, Di Yang, Xinyuan Chen, Francois Bremond, Yu Qiao, Antitza Dantcheva• 2025

Related benchmarks

Task                                   | Dataset                           | Metric | Result | Rank
Portrait Animation (Self-reenactment)  | VFHQ (test)                       | FVD    | 317.5  | 23
Portrait Animation (Cross-reenactment) | FFHQ source + VFHQ driving (test) | CSIM   | 0.827  | 18
Self-reenactment portrait animation    | MEAD 59 (test)                    | CSIM   | 0.8957 | 18
