
Pixel Codec Avatars

About

Telecommunication with photorealistic avatars in virtual or augmented reality is a promising path toward authentic face-to-face communication in 3D across physical distances. In this work, we present Pixel Codec Avatars (PiCA): a deep generative model of 3D human faces that achieves state-of-the-art reconstruction performance while being computationally efficient and adaptive to the rendering conditions during execution. Our model combines two core ideas: (1) a fully convolutional architecture for decoding spatially varying features, and (2) a rendering-adaptive per-pixel decoder. Both techniques are integrated via a dense surface representation that is learned in a weakly-supervised manner from low-topology mesh tracking over training images. We demonstrate that PiCA improves reconstruction over existing techniques across testing expressions and views on persons of different genders and skin tones. Importantly, the PiCA model is much smaller than the state-of-the-art baseline model and makes multi-person telecommunication possible: on a single Oculus Quest 2 mobile VR headset, 5 avatars are rendered in real time in the same scene.
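The rendering-adaptive, per-pixel decoding idea can be illustrated with a minimal sketch: a heavy convolutional decoder produces a feature map once per frame, and a tiny per-pixel network is then evaluated only at the pixels the rasterizer marks as covered, conditioned on the per-pixel view direction. This is an illustrative toy in NumPy, not the authors' implementation; all array sizes, layer widths, and names (`make_mlp`, `mlp_forward`, `pixel_feats`, `view_dirs`) are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Random weights for an illustrative per-pixel MLP (untrained)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

# Hypothetical inputs: features sampled from a convolutional decoder's
# output map at the pixels a rasterizer reports as visible.
feat_dim, n_visible = 8, 1000
pixel_feats = rng.standard_normal((n_visible, feat_dim))  # sampled features
view_dirs = rng.standard_normal((n_visible, 3))           # per-pixel view direction
view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)

# Rendering-adaptive step: the small MLP runs only on visible pixels,
# so cost scales with screen coverage rather than texture resolution.
decoder = make_mlp([feat_dim + 3, 16, 3])
rgb = mlp_forward(decoder, np.concatenate([pixel_feats, view_dirs], axis=1))
print(rgb.shape)  # one RGB triple per visible pixel
```

The key design point sketched here is that only the cheap final stage depends on the output resolution and view, which is what makes the approach adaptive to rendering conditions at execution time.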

Shugao Ma, Tomas Simon, Jason Saragih, Dawei Wang, Yuecheng Li, Fernando De La Torre, Yaser Sheikh • 2021

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Avatar Reconstruction | full body dataset | MAE | 2.84 | 4 |
| Facial Image Rendering | Subject 1 (test) | Front MSE | 21.39 | 4 |
| Facial Image Rendering | Subject 2 (test) | Front MSE | 18.31 | 4 |
| Facial Image Rendering | Subject 3 (test) | Front MSE | 23.11 | 4 |
| Facial Image Rendering | Subject 4 (test) | Front MSE | 6.81 | 4 |
| Facial Image Rendering | Subject 5 (test) | Front MSE | 8.74 | 4 |
| Facial Image Rendering | Subject 6 (test) | MSE (Front) | 6.22 | 4 |
| 3D Avatar Rendering | Quest 3 Mobile Benchmark (test) | LPIPS | 0.4 | 4 |
| Facial Rendering | Multiface (test) | MSE (Subject 1) | 34.5 | 4 |
| Head Avatar Reconstruction | Face Dataset (Subject 3) | MAE | 6.59 | 4 |

Showing 10 of 14 rows.
