You said that?
About
We present a method for generating a video of a talking face. The method takes as inputs: (i) still images of the target face, and (ii) an audio speech segment; and outputs a video of the target face lip synched with the audio. The method runs in real time and is applicable to faces and audio not seen at training time. To achieve this we propose an encoder-decoder CNN model that uses a joint embedding of the face and audio to generate synthesised talking face video frames. The model is trained on tens of hours of unlabelled videos. We also show results of re-dubbing videos using speech from a different person.
Joon Son Chung, Amir Jamaludin, Andrew Zisserman • 2017
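To make the encoder-decoder idea concrete, here is a minimal PyTorch sketch of a model with a face (identity) encoder, an audio encoder, and a decoder driven by the concatenated joint embedding. All layer sizes, input shapes (112×112 face crops, 13×35 MFCC windows), and names are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a joint face/audio encoder-decoder for talking-face
# generation. Shapes and layer sizes are hypothetical, not the paper's.
import torch
import torch.nn as nn

class TalkingFaceGenerator(nn.Module):
    def __init__(self, embed_dim=256):
        super().__init__()
        # Identity encoder: still face image -> identity embedding.
        self.face_enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 112 -> 56
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 56 -> 28
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 28 -> 14
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, embed_dim),
        )
        # Audio encoder: short MFCC window -> speech embedding.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, embed_dim),
        )
        # Decoder: joint embedding -> one synthesised face frame.
        self.decoder = nn.Sequential(
            nn.Linear(2 * embed_dim, 256 * 14 * 14), nn.ReLU(),
            nn.Unflatten(1, (256, 14, 14)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 14 -> 28
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 28 -> 56
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 56 -> 112
        )

    def forward(self, face, audio):
        # Concatenate the two embeddings into a joint code, then decode
        # a video frame conditioned on both identity and speech.
        z = torch.cat([self.face_enc(face), self.audio_enc(audio)], dim=1)
        return self.decoder(z)

# One frame per short audio window (batch of 2 for illustration).
model = TalkingFaceGenerator()
frame = model(torch.randn(2, 3, 112, 112),  # still image of the target face
              torch.randn(2, 1, 13, 35))    # MFCC window (13 coeffs x 35 steps)
print(frame.shape)  # torch.Size([2, 3, 112, 112])
```

At inference, sliding such a window over the input speech and decoding one frame per window yields the output video; training on many hours of unlabelled video is possible because the face frames themselves supervise the reconstruction.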
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Talking Face Generation | LRW (test) | SSIM | 0.46 | 28 |
| Talking Face Generation | LRS2 (test) | -- | -- | 18 |
| Talking Head Generation | LRW | LSE-C | 1.762 | 6 |
| Talking Head Generation | LRS2 | LSE-C | 1.587 | 6 |
| Talking Head Generation | LRS3 | LSE-C | 1.681 | 6 |