
AVCap: Leveraging Audio-Visual Features as Text Tokens for Captioning

About

In recent years, advances in representation learning and language models have propelled Automated Captioning (AC) to new heights, enabling the generation of human-level descriptions. Leveraging these advances, we propose AVCap, an Audio-Visual Captioning framework that serves as a simple yet powerful baseline for audio-visual captioning. AVCap treats audio-visual features as text tokens, which offers advantages not only in performance but also in the extensibility and scalability of the model. AVCap is designed around three pivotal dimensions: the exploration of optimal audio-visual encoder architectures, the adaptation of pre-trained models according to the characteristics of the generated text, and the investigation of the efficacy of modality fusion in captioning. Our method outperforms existing audio-visual captioning methods across all metrics, and the code is available at https://github.com/JongSuk1/AVCap
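The core idea of treating audio-visual features as text tokens can be sketched as follows: encoder features are linearly projected into the decoder's token embedding space and prepended to the caption's token embeddings, so the decoder attends over one combined sequence. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, names, and the random features are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
d_av = 768      # audio-visual encoder feature dimension
d_model = 512   # text decoder embedding dimension
n_av = 16       # number of audio-visual feature vectors
n_text = 8      # number of caption tokens generated so far

# Stand-in features from a (hypothetical) fused audio-visual encoder.
av_feats = rng.standard_normal((n_av, d_av))

# Linear projection mapping encoder features into the decoder's
# token embedding space, so they can be consumed like text tokens.
W_proj = rng.standard_normal((d_av, d_model)) / np.sqrt(d_av)
av_tokens = av_feats @ W_proj                      # shape (n_av, d_model)

# Embedded text tokens of the partially generated caption.
text_tokens = rng.standard_normal((n_text, d_model))

# Prefix the projected audio-visual tokens to the text tokens;
# the decoder then attends over the combined sequence.
decoder_input = np.concatenate([av_tokens, text_tokens], axis=0)
print(decoder_input.shape)  # (24, 512)
```

In this formulation the captioning decoder needs no architectural changes: the audio-visual content simply arrives as extra prefix tokens, which is what makes the approach easy to extend to new encoders or modalities.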

Jongsuk Kim, Jiwon Shin, Junmo Kim • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Audio Captioning | AudioCaps (test) | CIDEr: 75.8 | 140 |
