
MIBURI: Towards Expressive Interactive Gesture Synthesis

About

Embodied Conversational Agents (ECAs) aim to emulate human face-to-face interaction through speech, gestures, and facial expressions. Current large language model (LLM)-based conversational agents lack embodiment and the expressive gestures essential for natural interaction. Existing solutions for ECAs often produce rigid, low-diversity motions that are unsuitable for human-like interaction. Alternatively, generative methods for co-speech gesture synthesis yield natural body gestures but depend on future speech context and require long run-times. To bridge this gap, we present MIBURI, the first online, causal framework for generating expressive full-body gestures and facial expressions synchronized with real-time spoken dialogue. We employ body-part-aware gesture codecs that encode hierarchical motion details into multi-level discrete tokens. These tokens are then autoregressively generated by a two-dimensional causal framework conditioned on LLM-based speech-text embeddings, modeling both temporal dynamics and part-level motion hierarchy in real time. Further, we introduce auxiliary objectives that encourage expressive and diverse gestures while preventing convergence to static poses. Comparative evaluations against recent baselines demonstrate that our causal, real-time approach produces natural and contextually aligned gestures. We urge the reader to explore the demo videos at https://vcai.mpi-inf.mpg.de/projects/MIBURI/.
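To make the two-dimensional causal generation scheme more concrete, below is a minimal PyTorch sketch of autoregressive decoding over a grid of time steps and body parts, conditioned on streaming speech embeddings. It is not MIBURI's actual architecture: the module names, part count, dimensions, vocabulary size, greedy decoding, and the way speech frames are aligned are all illustrative assumptions.

```python
# Sketch: 2D causal autoregressive decoding over time (t) and body parts (p).
# Tokens are flattened in (time, part) order so a standard causal mask gives
# each position access to all past frames plus earlier parts of its own frame.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

NUM_PARTS, VOCAB, DIM = 4, 512, 256  # e.g. face, upper body, hands, lower body


class Causal2DGestureDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.token_emb = nn.Embedding(VOCAB, DIM)
        self.part_emb = nn.Embedding(NUM_PARTS, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, parts, speech):
        # tokens: (B, L) discrete gesture tokens in (time, part) order
        # parts:  (B, L) body-part index of each token
        # speech: (B, L, DIM) speech-text embedding aligned to each position
        x = self.token_emb(tokens) + self.part_emb(parts) + speech
        L = x.size(1)
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        h = self.backbone(x, mask=causal)  # attend only to past positions
        return self.head(h)  # next-token logits


@torch.no_grad()
def stream_step(model, tokens, parts, speech):
    """Generate the NUM_PARTS tokens of one new frame, part by part."""
    for p in range(NUM_PARTS):
        logits = model(tokens, parts, speech)
        nxt = logits[:, -1].argmax(-1, keepdim=True)  # greedy for brevity
        tokens = torch.cat([tokens, nxt], dim=1)
        parts = torch.cat([parts, torch.full_like(nxt, p)], dim=1)
        # Repeat the latest speech frame; a real online system would append
        # the fresh embedding from a causal (left-context-only) encoder.
        speech = torch.cat([speech, speech[:, -1:]], dim=1)
    return tokens, parts, speech


model = Causal2DGestureDecoder()
B = 1
tokens = torch.zeros(B, 1, dtype=torch.long)  # BOS-like seed token
# Seed as the last part of a dummy frame so generation starts at part 0.
parts = torch.full((B, 1), NUM_PARTS - 1, dtype=torch.long)
speech = torch.randn(B, 1, DIM)  # stand-in for a real speech-text embedding
tokens, parts, speech = stream_step(model, tokens, parts, speech)
```

Because the mask only ever looks backwards in the flattened (time, part) order, a loop like this could run online: each new audio frame yields one more round of part-by-part token generation without any future speech context.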

M. Hamza Mughal, Rishabh Dabral, Vera Demberg, Christian Theobalt • 2026

Related benchmarks

Task                Dataset                             Metric      Result   Rank
Gesture Synthesis   BEAT2 multi-speaker (23 speakers)   BeatAlign   0.461    12
Gesture Synthesis   BEAT2 single-speaker (Scott)        BeatAlign   0.79     9
Gesture Synthesis   Embody3D                            BeatAlign   0.605    4

Other info

GitHub
