Efficient generative adversarial networks using linear additive-attention Transformers

About

Although the capacity of deep generative models for image generation, such as Diffusion Models (DMs) and Generative Adversarial Networks (GANs), has dramatically improved in recent years, much of their success can be attributed to computationally expensive architectures. This has limited their adoption and use to research laboratories and companies with large resources, while significantly raising the carbon footprint for training, fine-tuning, and inference. In this work, we present a novel GAN architecture, which we call LadaGAN. This architecture is based on a linear attention Transformer block named Ladaformer. The main component of this block is a linear additive-attention mechanism that computes a single attention vector per head instead of the quadratic dot-product attention. We employ Ladaformer in both the generator and discriminator, which reduces the computational complexity and overcomes the training instabilities often associated with Transformer GANs. LadaGAN consistently outperforms existing convolutional and Transformer GANs on benchmark datasets at different resolutions while being significantly more efficient. Moreover, LadaGAN shows competitive performance compared to state-of-the-art multi-step generative models (e.g., DMs) using orders of magnitude fewer computational resources.

Emilio Morales-Juarez, Gibran Fuentes-Pineda • 2024
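
The abstract describes the Ladaformer's attention only at a high level: one attention vector per head, computed in linear rather than quadratic time. As a rough illustration of how such a linear additive-attention step can work, the sketch below follows a Fastformer-style reading of that idea; the module name LinearAdditiveAttention and its layers (to_q, to_k, score, proj) are hypothetical and are not taken from the paper or its released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAdditiveAttention(nn.Module):
    """Minimal single-head sketch of a linear additive-attention step.

    NOTE: illustrative assumption, not the authors' implementation.
    Instead of the O(n^2) dot-product attention matrix, a learned scoring
    vector assigns one softmax weight per token; the weighted sum forms a
    single global attention vector that is broadcast back to every position,
    so the cost is O(n * d) in the sequence length n.
    """

    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1)   # one scalar score per token
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                # x: (batch, seq_len, dim)
        q = self.to_q(x)
        k = self.to_k(x)
        # One attention weight per token (linear in seq_len).
        alpha = F.softmax(self.score(q), dim=1)          # (batch, seq_len, 1)
        global_q = (alpha * q).sum(dim=1, keepdim=True)  # (batch, 1, dim)
        # Broadcast the single attention vector and mix it with the keys.
        out = self.proj(global_q * k)                    # (batch, seq_len, dim)
        return out + x                                   # residual connection

if __name__ == "__main__":
    attn = LinearAdditiveAttention(dim=64)
    tokens = torch.randn(2, 256, 64)     # e.g. 16x16 image patches
    print(attn(tokens).shape)            # torch.Size([2, 256, 64])
```

Because no n x n attention matrix is ever materialized, memory and compute grow linearly with the number of image patches, which is the efficiency property the abstract attributes to Ladaformer.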

Related benchmarks

Task               Dataset                               FID    Rank
Image Generation   CIFAR-10 32x32                        3.29   147
Image Generation   CelebA 64x64 (50k samples)            1.81   7
Image Generation   LSUN Bedroom 128x128 (30k samples)    5.08   6
Image Generation   CelebA 64x64 (19k samples)            2.89   3
Image Generation   LSUN Bedroom 256x256 (50k samples)    6.36   3
Image Generation   FFHQ 128x128 (70k samples)            4.48   3

Other info

Code
