
Learning Word-Like Units from Joint Audio-Visual Analysis

About

Given a collection of images and spoken audio captions, we present a method for discovering word-like acoustic units in the continuous speech signal and grounding them to semantically relevant image regions. For example, our model is able to detect spoken instances of the word 'lighthouse' within an utterance and associate them with image regions containing lighthouses. We do not use any form of conventional automatic speech recognition, nor do we use any text transcriptions or conventional linguistic annotations. Our model effectively implements a form of spoken language acquisition, in which the computer learns not only to recognize word categories by sound, but also to enrich the words it learns with semantics by grounding them in images.
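The grounding step described above can be sketched as a similarity match between learned embeddings. This is an illustrative assumption, not the authors' code: it presumes some encoder has already mapped candidate audio segments and proposed image regions into a shared embedding space, and then associates each word-like unit with its best-matching region.

```python
import numpy as np

# Hypothetical sketch of the grounding step: embeddings for candidate audio
# segments and for image regions (produced by learned encoders, not shown)
# are compared, and each segment is grounded to the region it matches best.
# All names, shapes, and the random data here are illustrative assumptions.

rng = np.random.default_rng(0)
emb_dim = 128
audio_segments = rng.normal(size=(5, emb_dim))   # 5 candidate word-like units
image_regions = rng.normal(size=(9, emb_dim))    # 9 proposed image regions

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Cosine similarity between every (segment, region) pair.
sim = l2_normalize(audio_segments) @ l2_normalize(image_regions).T  # (5, 9)

# Ground each audio segment to its highest-scoring image region.
best_region = sim.argmax(axis=1)
best_score = sim.max(axis=1)
for seg, (region, score) in enumerate(zip(best_region, best_score)):
    print(f"segment {seg} -> region {region} (similarity {score:+.3f})")
```

In this sketch a spoken instance of 'lighthouse' would be the audio segment whose embedding lands closest to the embedding of the image region containing the lighthouse.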

David Harwath, James R. Glass • 2017

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Speech-to-Image Retrieval | Places audio caption dataset, 1,000 images/captions (held out) | R@1: 0.161 | 14 |
| Image-to-Speech Retrieval | Places audio caption dataset, 1,000 images/captions (held out) | R@1: 13 | 8 |
| Image-to-Audio Retrieval | PlacesAudio (val) | Acc@10: 54.2 | 6 |
| Audio-to-Image Retrieval | PlacesAudio (val) | Acc@10: 56.4 | 6 |
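The retrieval metrics in the table (R@K, Acc@K) share one standard recipe: a query succeeds if its true match ranks within the top K retrieved candidates. A minimal sketch, assuming a square similarity matrix where candidate `j == i` is the ground truth for query `i` (the function name and random data are illustrative, not from the paper):

```python
import numpy as np

# Sketch of Recall@K / Acc@K evaluation from a query-by-candidate similarity
# matrix. Ground truth is assumed to lie on the diagonal (query i matches
# candidate i), as in a held-out set of paired images and spoken captions.

def recall_at_k(sim, k):
    """Fraction of queries whose true match appears in the top-k candidates."""
    n = sim.shape[0]
    # Top-k candidate indices per query, highest similarity first.
    topk = np.argsort(-sim, axis=1)[:, :k]
    hits = (topk == np.arange(n)[:, None]).any(axis=1)
    return hits.mean()

rng = np.random.default_rng(1)
n = 1000  # e.g. a held-out set of 1,000 image/caption pairs
sim = rng.normal(size=(n, n))
sim[np.arange(n), np.arange(n)] += 2.0  # make true pairs score higher on average

print(f"R@1  = {recall_at_k(sim, 1):.3f}")
print(f"R@10 = {recall_at_k(sim, 10):.3f}")
```

Increasing K can only help, so R@10 is always at least R@1 on the same similarity matrix.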
