
NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion

About

This paper presents a unified multimodal pre-trained model called NÜWA that can generate new or manipulate existing visual data (i.e., images and videos) for various visual synthesis tasks. To cover language, image, and video at the same time for different scenarios, a 3D transformer encoder-decoder framework is designed, which can not only deal with videos as 3D data but also adapt to texts and images as 1D and 2D data, respectively. A 3D Nearby Attention (3DNA) mechanism is also proposed to consider the nature of the visual data and reduce the computational complexity. We evaluate NÜWA on 8 downstream tasks. Compared to several strong baselines, NÜWA achieves state-of-the-art results on text-to-image generation, text-to-video generation, video prediction, etc. Furthermore, it also shows surprisingly good zero-shot capabilities on text-guided image and video manipulation tasks. Project repo is https://github.com/microsoft/NUWA.
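The abstract describes 3D Nearby Attention (3DNA): instead of letting every token in a T×H×W visual volume attend to all other tokens, each token attends only to a local 3D neighborhood, which reduces computational cost. The following is a minimal illustrative sketch of that idea in NumPy; the window shape, tensor layout, and single-head formulation are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def nearby_attention_3d(x, window=(1, 1, 1)):
    """Sketch of 3D nearby (local-window) attention.

    x: array of shape (T, H, W, d) -- a 3D grid of d-dim tokens.
    window: half-widths (wt, wh, ww) of the local neighborhood along
    the temporal, height, and width axes. Each output token is a
    softmax-weighted average over only its clipped 3D neighborhood,
    rather than over all T*H*W tokens (full self-attention).
    """
    T, H, W, d = x.shape
    wt, wh, ww = window
    out = np.zeros_like(x)
    for t in range(T):
        for h in range(H):
            for w in range(W):
                # Clip the local neighborhood to the volume bounds.
                nb = x[max(0, t - wt):t + wt + 1,
                       max(0, h - wh):h + wh + 1,
                       max(0, w - ww):w + ww + 1].reshape(-1, d)
                q = x[t, h, w]
                # Scaled dot-product attention over the neighborhood only.
                scores = nb @ q / np.sqrt(d)
                weights = np.exp(scores - scores.max())
                weights /= weights.sum()
                out[t, h, w] = weights @ nb
    return out

x = np.random.default_rng(0).normal(size=(4, 8, 8, 16))
y = nearby_attention_3d(x, window=(1, 2, 2))
print(y.shape)  # (4, 8, 8, 16)
```

With a fixed window, the cost per token is constant in the grid size, so the whole pass scales linearly in T·H·W instead of quadratically; this is the complexity saving the abstract attributes to 3DNA.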

Chenfei Wu, Jian Liang, Lei Ji, Fan Yang, Yuejian Fang, Daxin Jiang, Nan Duan · 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Generation | MSR-VTT (test) | CLIP Similarity | 0.2439 | 85 |
| Video Prediction | BAIR Robot Pushing | FVD | 86.9 | 38 |
| Text-to-Image Synthesis | MS-COCO (val) | FID | 12.9 | 35 |
| Grounded Text-to-Image Generation | COCO 2014 (val) | FID | 12.9 | 26 |
| Text-to-Image Generation | MS-COCO Captions 30,000 (val) | FID-0 | 12.9 | 21 |
| Text-to-Image Synthesis | MSCOCO (test) | FID | 12.9 | 18 |
| Video Prediction | BAIR 64x64 | FVD | 86.9 | 14 |
| Text to Image | MSCOCO 256x256 (test) | FID (0 Ref) | 12.9 | 7 |
| Language Guided Image Inpainting | MaskCOCO | FID | 21.4 | 5 |
| Text-to-Video | Kinetics | Accuracy | 77.9 | 4 |

Showing 10 of 12 rows.
