SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters
About
Human beings are social animals. How to equip 3D autonomous characters with similar social intelligence, so that they can perceive, understand, and interact with humans, remains an open yet fundamental problem. In this paper, we introduce SOLAMI, the first end-to-end Social Vision-Language-Action (VLA) Modeling framework for Immersive interaction with 3D autonomous characters. Specifically, SOLAMI builds 3D autonomous characters from three aspects: (1) Social VLA Architecture: We propose a unified social VLA framework that generates multimodal responses (speech and motion) from the user's multimodal input to drive the character in social interaction. (2) Interactive Multimodal Data: We present SynMSI, a synthetic multimodal social interaction dataset generated by an automatic pipeline that uses only existing motion datasets, addressing the issue of data scarcity. (3) Immersive VR Interface: We develop a VR interface that lets users immersively interact with characters driven by various architectures. Extensive quantitative experiments and user studies demonstrate that our framework produces more precise and natural character responses (in both speech and motion) that align with user expectations, at lower latency.
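To make the unified architecture concrete, below is a minimal Python sketch of the input/output interface that an end-to-end social VLA model of this kind implies: user speech and motion go in as one token stream, and character speech and motion come out. All names (`SocialVLA`, `MultimodalTurn`, the tokenizers, `decoder.generate`) are hypothetical illustrations, not the paper's released API.

```python
# Hypothetical sketch of a unified social VLA interface in the spirit of
# SOLAMI: user speech + user motion in, character speech + motion out.
from dataclasses import dataclass

import numpy as np


@dataclass
class MultimodalTurn:
    speech_waveform: np.ndarray  # raw audio samples for one turn
    motion_sequence: np.ndarray  # (frames, joints, 3) body motion


class SocialVLA:
    """Single autoregressive model over interleaved speech/motion tokens."""

    def __init__(self, speech_tokenizer, motion_tokenizer, decoder):
        self.speech_tokenizer = speech_tokenizer  # audio <-> discrete tokens
        self.motion_tokenizer = motion_tokenizer  # motion <-> discrete tokens
        self.decoder = decoder  # autoregressive transformer (assumed)

    def respond(self, turn: MultimodalTurn) -> MultimodalTurn:
        # 1. Discretize both input modalities into one token stream.
        tokens = (self.speech_tokenizer.encode(turn.speech_waveform)
                  + self.motion_tokenizer.encode(turn.motion_sequence))
        # 2. Generate the character's response tokens in a single pass.
        out = self.decoder.generate(tokens)
        # 3. Detokenize back into speech and motion to drive the character.
        return MultimodalTurn(
            speech_waveform=self.speech_tokenizer.decode(out["speech"]),
            motion_sequence=self.motion_tokenizer.decode(out["motion"]),
        )
```

The design point the abstract emphasizes is that speech and motion are handled by one end-to-end model rather than separate cascaded pipelines, which is what makes the lower response latency plausible.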
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Multimodal Dialogue Generation | SynMSI (test) | Context Relevance | 4.838 | 13 |
| Group Motion Generation | DnD Group Gesture (test) | Root Error (mm) | 188.6 | 13 |
| Speaking State Prediction | DnD Group Gesture (test) | AP | 50 | 10 |
| Speech Generation | DnD Group Gesture (test) | BERT Score | 0.428 | 10 |
| Head Orientation Prediction | DnD Group Gesture | MAE Head Orientation (deg) | 27.46 | 3 |
| Social Cue Score Prediction | DnD Group Gesture | Social Cue Error (User 1) | 32 | 3 |
| Reaction Generation | DnD Group Gesture | Motion Coherence | 2.7 | 2 |
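As a point of reference for the Speech Generation row, BERT Score is commonly computed with the open-source `bert-score` package. The sketch below uses placeholder candidate/reference strings, not data from the benchmark.

```python
# Minimal usage sketch of the BERT Score metric listed in the table,
# using the open-source `bert-score` package (pip install bert-score).
from bert_score import score

candidates = ["The character waves and greets the user."]   # placeholder
references = ["The character greets the user with a wave."]  # placeholder

# Returns per-sentence precision, recall, and F1 tensors.
P, R, F1 = score(candidates, references, lang="en")
print(f"BERTScore F1: {F1.mean().item():.3f}")
```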