
Jointly Discovering Visual Objects and Spoken Words from Raw Sensory Input

About

In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically-coupled object and word detectors.

David Harwath, Adrià Recasens, Dídac Surís, Galen Chuang, Antonio Torralba, James Glass • 2018
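The paper scores an image/caption pair by comparing a spatial image feature map against a temporal audio feature sequence (a "matchmap"), and the retrieval objective is trained on these pooled scores. Below is a minimal, hypothetical PyTorch sketch of that kind of similarity computation; the encoder outputs, tensor shapes, and the particular pooling choice are assumptions for illustration, not the exact published configuration.

```python
import torch

def matchmap_similarity(image_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
    """Matchmap-style similarity between one image and one spoken caption.

    image_feats: (H, W, D) spatial feature map from an image encoder.
    audio_feats: (T, D) frame-level features from a speech encoder.
    Returns a scalar similarity: max-pool over image locations per audio
    frame, then mean-pool over time (one of several possible poolings).
    """
    # Matchmap: dot product between every image location and every audio frame.
    matchmap = torch.einsum('hwd,td->hwt', image_feats, audio_feats)  # (H, W, T)
    # Best-matching image location for each audio frame, averaged over frames.
    return matchmap.flatten(0, 1).max(dim=0).values.mean()
```

The intermediate (H, W, T) matchmap is what makes the localization analysis possible: high-scoring cells couple a region of the image with a segment of the spoken caption without any explicit alignment supervision.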

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech-to-Image Retrieval | Places audio caption dataset, 1,000 image/caption (held out) | R@1 | 0.271 | 14
Sound Source Localization | VGGSound Source | cIoU | 6.8 | 9
Image-to-Speech Retrieval | Places audio caption dataset, 1,000 image/caption (held out) | R@1 | 13.9 | 8
Image-to-Text Retrieval | Places audio caption dataset, ASR Text (held out) | R@1 | 22 | 6
Audio-to-Image Retrieval | PlacesAudio (val) | Acc@10 | 60.4 | 6
Image-to-Audio Retrieval | PlacesAudio (val) | Acc@10 | 52.8 | 6
Speech Prompted Semantic Segmentation | ADE20K (Evaluation) | mAP | 32.2 | 4
Sound Prompted Semantic Segmentation | ADE20K (Evaluation) | mAP | 16.8 | 4
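Most of the retrieval rows above report recall-style metrics (R@1, Acc@10): the fraction of queries whose ground-truth pair is ranked within the top K candidates. The sketch below shows how such a metric is typically computed from pooled image and audio embeddings; the function and variable names are illustrative, not taken from the benchmark code.

```python
import torch

def recall_at_k(image_emb: torch.Tensor, audio_emb: torch.Tensor, k: int = 10) -> float:
    """Recall@K for audio-to-image retrieval on a held-out set.

    image_emb, audio_emb: (N, D) pooled embeddings for N image/caption pairs,
    where row i of each matrix corresponds to the same ground-truth pair.
    """
    # Cosine similarity between every caption and every image.
    image_emb = torch.nn.functional.normalize(image_emb, dim=1)
    audio_emb = torch.nn.functional.normalize(audio_emb, dim=1)
    sims = audio_emb @ image_emb.t()                      # (N, N)
    # For each caption, check whether its paired image is in the top K.
    topk = sims.topk(k, dim=1).indices                    # (N, k)
    targets = torch.arange(sims.size(0)).unsqueeze(1)     # (N, 1)
    return (topk == targets).any(dim=1).float().mean().item()
```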
