PolySLGen: Online Multimodal Speaking-Listening Reaction Generation in Polyadic Interaction

About

Human-like multimodal reaction generation is essential for natural group interactions between humans and embodied AI. However, existing approaches are limited to single-modality or speaking-only responses in dyadic interactions, making them unsuitable for realistic social scenarios. Many also overlook nonverbal cues and the complex dynamics of polyadic interactions, both of which are critical for engagement and conversational coherence. In this work, we present PolySLGen, an online framework for Polyadic multimodal Speaking and Listening reaction Generation. Given past conversation and motion from all participants, PolySLGen generates a future speaking or listening reaction for a target participant, comprising speech, body motion, and a speaking-state score. To model group interactions effectively, we propose a pose fusion module and a social cue encoder that jointly aggregate motion and social signals from the group. Extensive quantitative and qualitative evaluations show that PolySLGen produces contextually appropriate and temporally coherent multimodal reactions, outperforming several adapted and state-of-the-art baselines in motion quality, motion-speech alignment, speaking-state prediction, and human-perceived realism.
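The abstract names a pose fusion module and a social cue encoder but this page includes no code. Below is a minimal sketch of how such an interface could look, assuming attention-based fusion over participants' pose features and a recurrent social cue encoder; every module name, tensor shape, and output head here (PoseFusion, PolySLGenSketch, motion_dim, and so on) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of the PolySLGen interface described in the abstract.
# All names, shapes, and fusion details are assumptions for illustration.
import torch
import torch.nn as nn

class PoseFusion(nn.Module):
    """Aggregates per-participant pose features into a target-centric
    group feature via cross-attention (an assumed design choice)."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        # target: (B, T, D) pose features of the target participant
        # group:  (B, T*P, D) pose features of all P participants, flattened over time
        fused, _ = self.attn(query=target, key=group, value=group)
        return fused

class PolySLGenSketch(nn.Module):
    """Toy stand-in for the full model: maps fused pose and social-cue
    features to motion, speech features, and a speaking-state score."""
    def __init__(self, dim: int = 128, motion_dim: int = 57, speech_dim: int = 80):
        super().__init__()
        self.pose_fusion = PoseFusion(dim)
        self.social_encoder = nn.GRU(dim, dim, batch_first=True)  # assumed social cue encoder
        self.motion_head = nn.Linear(2 * dim, motion_dim)
        self.speech_head = nn.Linear(2 * dim, speech_dim)
        self.state_head = nn.Linear(2 * dim, 1)  # per-frame speaking-state score

    def forward(self, target_pose, group_pose, social_cues):
        fused = self.pose_fusion(target_pose, group_pose)
        social, _ = self.social_encoder(social_cues)
        h = torch.cat([fused, social], dim=-1)
        return self.motion_head(h), self.speech_head(h), torch.sigmoid(self.state_head(h))

# Usage with random inputs: batch of 2, 30 past frames, 4 participants.
B, T, P, D = 2, 30, 4, 128
model = PolySLGenSketch(dim=D)
motion, speech, state = model(
    torch.randn(B, T, D),      # target participant pose features
    torch.randn(B, T * P, D),  # all participants' pose features
    torch.randn(B, T, D),      # encoded social cues (gaze, turn-taking, etc.)
)
print(motion.shape, speech.shape, state.shape)  # (2, 30, 57) (2, 30, 80) (2, 30, 1)
```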

Zhi-Yi Lin, Thomas Markhorst, Jouh Yeong Chew, Xucong Zhang • 2026

Related benchmarks

Task                         | Dataset                  | Metric                     | Result | Rank
Group Motion Generation      | DnD Group Gesture (test) | Root Error (mm)            | 108.7  | 13
Speaking State Prediction    | DnD Group Gesture (test) | AP                         | 67     | 10
Speech Generation            | DnD Group Gesture (test) | BERT Score                 | 0.508  | 10
Head Orientation Prediction  | DnD Group Gesture        | MAE Head Orientation (deg) | 26.46  | 3
Social Cue Score Prediction  | DnD Group Gesture        | Social Cue Error (User 1)  | 30     | 3
Reaction Generation          | DnD Group Gesture        | Motion Coherence           | 3.6    | 2
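The table above reports AP (average precision) for speaking-state prediction. As a worked example of that metric, the sketch below computes average precision over frame-level binary speaking labels with scikit-learn on synthetic data; the benchmark's actual evaluation protocol is not given on this page, so the frame-level labeling and threshold-free scoring shown are assumptions.

```python
# Minimal sketch of an AP computation for speaking-state prediction,
# assuming frame-level binary labels and continuous model scores.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                           # 1 = speaking, 0 = listening
y_score = np.clip(y_true * 0.6 + rng.random(1000) * 0.5, 0, 1)   # noisy per-frame scores

ap = average_precision_score(y_true, y_score)  # area under the precision-recall curve
print(f"AP: {ap:.3f}")
```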
