Can Language Models Learn to Listen?

About

We present a framework for generating appropriate facial responses from a listener in dyadic social interactions, based on the speaker's words. Given an input transcription of the speaker's words with their timestamps, our approach autoregressively predicts the listener's response: a sequence of facial gestures, quantized using a VQ-VAE. Since gesture is a component of language, we propose treating the quantized atomic motion elements as additional language token inputs to a transformer-based large language model. Initializing our transformer with the weights of a language model pre-trained only on text yields significantly higher-quality listener responses than training a transformer from scratch. We show through quantitative metrics and a qualitative user study that our generated listener motion is fluent and reflective of language semantics. In our evaluation, we analyze the model's ability to exploit the temporal and semantic aspects of spoken text. Project page: https://people.eecs.berkeley.edu/~evonne_ng/projects/text2listen/

Evonne Ng, Sanjay Subramanian, Dan Klein, Angjoo Kanazawa, Trevor Darrell, Shiry Ginosar • 2023
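
The abstract suggests a two-step pipeline: a VQ-VAE turns continuous listener facial motion into discrete codebook indices, and those indices are appended to the vocabulary of a text-pretrained transformer so it can predict them like any other token. Below is a minimal sketch of both steps. It is not the authors' code; the codebook size, latent dimension, motion-token naming, interleaving format, and the choice of GPT-2 as the text-pretrained model are all illustrative assumptions (the abstract specifies only "a language model pre-trained only on text").

```python
# Minimal sketch (assumed, not the authors' implementation) of:
# (1) a VQ-VAE-style bottleneck mapping continuous listener facial motion
#     to discrete codebook indices, and
# (2) extending a text-pretrained LM's vocabulary so those indices become
#     ordinary tokens it can predict autoregressively.
# All names, sizes, and the "gpt2" checkpoint are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

NUM_MOTION_CODES = 256  # assumed codebook size
MOTION_DIM = 64         # assumed latent dimension per motion frame


class VectorQuantizer(nn.Module):
    """Nearest-neighbour quantization: maps each continuous motion
    feature to the index of its closest codebook entry."""

    def __init__(self, num_codes: int = NUM_MOTION_CODES, dim: int = MOTION_DIM):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor):
        # z: (batch, time, dim) features from a motion encoder.
        flat = z.reshape(-1, z.size(-1))                   # (B*T, dim)
        dists = torch.cdist(flat, self.codebook.weight)    # (B*T, num_codes)
        indices = dists.argmin(dim=-1).view(z.shape[:-1])  # (B, T) discrete tokens
        z_q = self.codebook(indices)                       # quantized features
        # Straight-through estimator: gradients bypass the argmin.
        z_q = z + (z_q - z).detach()
        return z_q, indices


# --- Treat the quantized motion units as extra language tokens -----------
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # text-pretrained init

# One new token per codebook entry; their embeddings start untrained while
# every text weight keeps its pretrained value.
motion_tokens = [f"<motion_{i}>" for i in range(NUM_MOTION_CODES)]
tokenizer.add_tokens(motion_tokens)
model.resize_token_embeddings(len(tokenizer))

# A training sequence would then interleave timestamped speaker words with
# listener motion tokens, fine-tuned with the ordinary next-token objective
# over the mixed vocabulary. Hypothetical example:
example = "so then I said <motion_12> <motion_12> no way <motion_87>"
ids = tokenizer(example, return_tensors="pt").input_ids
logits = model(ids).logits  # next-token scores over text + motion tokens
```

Under this reading, the quality gain the abstract reports comes from the initialization: only the newly added motion-token embeddings start from scratch, while all text weights retain what the model learned during language pre-training.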

Related benchmarks

Task                           Dataset                     Metric                       Result   Rank
Group Motion Generation        DND GROUP GESTURE (test)    Root Error (mm)              185.2    13
Facial Expression Generation   REALTALK                    Variation                    0.0402   7
Facial Expression Generation   L2L trevor                  Variation                    0.1189   7
Listener Response Generation   Realtalk 1.0 (user study)   Appropriateness Score        2.7      4
Head Orientation Prediction    DnD Group Gesture           MAE Head Orientation (deg)   31.7     3
Social Cue Score Prediction    DnD Group Gesture           Social Cue Error (User 1)    35       3
