PolySLGen: Online Multimodal Speaking-Listening Reaction Generation in Polyadic Interaction
About
Human-like multimodal reaction generation is essential for natural group interactions between humans and embodied AI. However, existing approaches are limited to single-modality or speaking-only responses in dyadic interactions, making them unsuitable for realistic social scenarios. Many also overlook nonverbal cues and the complex dynamics of polyadic interactions, both of which are critical for engagement and conversational coherence. In this work, we present PolySLGen, an online framework for Polyadic multimodal Speaking and Listening reaction Generation. Given the past conversation and motion of all participants, PolySLGen generates a future speaking or listening reaction for a target participant, including speech, body motion, and a speaking state score. To model group interactions effectively, we propose a pose fusion module and a social cue encoder that jointly aggregate motion and social signals from the group. Extensive quantitative and qualitative evaluations show that PolySLGen produces contextually appropriate and temporally coherent multimodal reactions, outperforming several adapted and state-of-the-art baselines in motion quality, motion-speech alignment, speaking state prediction, and human-perceived realism.
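The pose fusion idea described above can be sketched as attention-weighted aggregation of per-participant motion features, with weights driven by social-cue signals. The snippet below is a minimal illustrative sketch in NumPy; the function names, shapes, and the use of a simple softmax over cue scores are our own assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_group_poses(pose_feats, social_scores):
    """Hypothetical pose-fusion step.

    pose_feats:    (P, D) motion features for P participants.
    social_scores: (P,)   per-participant social-cue scores
                          (e.g. speaking-state scores).
    Returns a (D,) fused group-context vector.
    """
    weights = softmax(social_scores)   # attention over participants
    return weights @ pose_feats        # weighted aggregation

rng = np.random.default_rng(0)
poses = rng.standard_normal((4, 16))       # 4 participants, 16-dim features
scores = np.array([2.0, 0.1, 0.1, 0.1])    # participant 0 currently speaking
context = fuse_group_poses(poses, scores)
print(context.shape)                       # (16,)
```

In this sketch, a participant with a higher cue score contributes more to the fused group context, which is one plausible way a social cue encoder could modulate pose fusion.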
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Group Motion Generation | DnD Group Gesture (test) | Root Error (mm) | 108.7 | 13 |
| Speaking State Prediction | DnD Group Gesture (test) | AP | 67 | 10 |
| Speech Generation | DnD Group Gesture (test) | BERT Score | 0.508 | 10 |
| Head Orientation Prediction | DnD Group Gesture | MAE Head Orientation (deg) | 26.46 | 3 |
| Social Cue Score Prediction | DnD Group Gesture | Social Cue Error (User 1) | 30 | 3 |
| Reaction Generation | DnD Group Gesture | Motion Coherence | 3.6 | 2 |