
A²V-SLP: Alignment-Aware Variational Modeling for Disentangled Sign Language Production

About

Building upon recent structural disentanglement frameworks for sign language production, we propose A²V-SLP, an alignment-aware variational framework that learns articulator-wise disentangled latent distributions rather than deterministic embeddings. A disentangled Variational Autoencoder (VAE) encodes ground-truth sign pose sequences and extracts articulator-specific mean and variance vectors, which serve as distributional supervision for training a non-autoregressive Transformer. Given text embeddings, the Transformer predicts both latent means and log-variances, and the VAE decoder reconstructs the final sign pose sequences through stochastic sampling at the decoding stage. By modeling latents distributionally rather than deterministically, this formulation preserves articulator-level structure and avoids latent collapse. In addition, we integrate a gloss attention mechanism to strengthen alignment between linguistic input and articulated motion. Experimental results show consistent gains over deterministic latent regression, achieving state-of-the-art back-translation performance and improved motion realism in a fully gloss-free setting.
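The decoding-stage sampling described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the articulator groups, their latent dimensions, and the stand-in zero mean/log-variance values are all hypothetical; in the actual model the per-articulator means and log-variances would come from the trained Transformer, and the samples would feed the VAE decoder.

```python
import numpy as np

def sample_articulator_latents(mu, log_var, rng):
    """Reparameterized draw z = mu + sigma * eps, with sigma = exp(log_var / 2).

    Sampling per articulator keeps the latent groups disentangled:
    noise drawn for one articulator never mixes into another's latent.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical articulator split and latent sizes (illustrative only).
articulators = {"body": 16, "lhand": 12, "rhand": 12, "face": 24}

rng = np.random.default_rng(0)
latents = {}
for name, dim in articulators.items():
    mu = np.zeros(dim)       # stand-in for the Transformer-predicted mean
    log_var = np.zeros(dim)  # stand-in for the predicted log-variance
    latents[name] = sample_articulator_latents(mu, log_var, rng)
```

Each entry in `latents` would then be passed to the corresponding branch of the VAE decoder; because sampling is stochastic, repeated decoding of the same text yields naturally varied but semantically consistent motion.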

Sümeyye Meryem Taşyürek, Enis Mücahid İskender, Hacer Yalim Keles • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Sign Language Production | PHOENIX14T (test) | BLEU-4 | 13.31 | 20
Sign Language Production | CSL-Daily (test) | DTW | 0.165 | 6
