
AudioGAN: A Compact and Efficient Framework for Real-Time High-Fidelity Text-to-Audio Generation

About

Text-to-audio (TTA) generation can significantly benefit the media industry by reducing production costs and improving workflow efficiency. However, most current TTA models (primarily diffusion-based) suffer from slow inference and high computational cost. In this paper, we introduce AudioGAN, the first successful Generative Adversarial Network (GAN)-based TTA framework that generates audio in a single pass, thereby reducing model complexity and inference time. To overcome the inherent difficulties of training GANs, we integrate multiple contrastive losses and propose two innovative components: Single-Double-Triple (SDT) Attention and Time-Frequency Cross-Attention (TF-CA). Extensive experiments on the AudioCaps dataset demonstrate that AudioGAN achieves state-of-the-art performance while using 90% fewer parameters and running 20 times faster, synthesizing audio in under one second. These results establish AudioGAN as a practical and powerful solution for real-time TTA.
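The abstract mentions training with multiple contrastive losses to align text and audio, but does not specify their form. As a minimal sketch, a symmetric InfoNCE-style objective over paired text/audio embeddings is one common formulation; the function name, temperature, and exact formulation below are assumptions for illustration, not the paper's actual losses.

```python
import numpy as np

def info_nce(text_emb, audio_emb, temperature=0.07):
    """Symmetric InfoNCE contrastive loss over a batch of paired embeddings.

    Illustrative only: AudioGAN's actual contrastive losses are not
    specified in the abstract, so this generic text<->audio formulation
    (and the temperature value) is an assumption.
    """
    # L2-normalize so dot products become cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = (t @ a.T) / temperature        # (B, B) similarity matrix
    idx = np.arange(logits.shape[0])        # matching pairs lie on the diagonal

    def cross_entropy(l):
        # numerically stable log-softmax per row, then pick the diagonal
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average text->audio and audio->text directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With matched pairs on the diagonal, minimizing this pulls paired text and audio embeddings together while pushing mismatched pairs apart.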

HaeChun Chung • 2025

Related benchmarks

Task                      | Dataset          | Result
Text-to-Audio Generation  | AudioCaps (test) | FAD 1.38
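The benchmark result above is reported as Fréchet Audio Distance (FAD), which fits a Gaussian to embeddings of real and generated audio (in TTA benchmarks these embeddings typically come from a pretrained classifier such as VGGish; that choice is an assumption here) and measures the Fréchet distance between the two fits. A numpy-only sketch:

```python
import numpy as np

def _sqrtm_psd(m):
    """Principal square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.T

def frechet_audio_distance(emb_real, emb_gen):
    """FAD = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}).

    Inputs are (n_samples, dim) embedding matrices for real and generated
    audio. The embedding model itself (e.g. VGGish) is outside this sketch.
    """
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    s_r = np.cov(emb_real, rowvar=False)
    s_g = np.cov(emb_gen, rowvar=False)
    # Tr((S_r S_g)^{1/2}) computed via the symmetric form
    # (S_g^{1/2} S_r S_g^{1/2})^{1/2}, which keeps everything PSD.
    sqrt_sg = _sqrtm_psd(s_g)
    tr_covmean = np.trace(_sqrtm_psd(sqrt_sg @ s_r @ sqrt_sg))
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(s_r) + np.trace(s_g) - 2.0 * tr_covmean)
```

Lower is better: identical embedding distributions give a FAD near zero, and the reported 1.38 means the generated audio's embedding statistics sit close to those of real AudioCaps recordings.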
