
Cross-Modal Music-Video Recommendation: A Study of Design Choices

About

In this work, we study music-video cross-modal recommendation, i.e. recommending a music track for a video or vice versa. We rely on a self-supervised learning paradigm to learn from a large amount of unlabelled data: we jointly learn audio and video embeddings by exploiting their co-occurrence in music-video clips. We build upon a recent video-music retrieval system, the VM-NET, which originally relies on an audio representation obtained from a set of statistics computed over handcrafted features. We demonstrate that using learned audio representations, such as the embeddings provided by the pre-trained MuSimNet, OpenL3, MusicCNN or AudioSet models, largely improves recommendations. We also validate the use of the cross-modal triplet loss originally proposed in the VM-NET against the binary cross-entropy loss commonly used in self-supervised learning. We perform all our experiments on the Music Video Dataset (MVD).
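To illustrate the kind of objective the abstract refers to, the sketch below shows a bidirectional cross-modal triplet (margin ranking) loss between audio and video embeddings: matching pairs are pulled together and mismatched pairs from the same batch are pushed apart, in both the music-to-video and video-to-music directions. This is a minimal PyTorch illustration under our own assumptions, not the paper's implementation; the margin, batch size and embedding dimension are placeholders.

```python
# Minimal sketch of a bidirectional cross-modal triplet loss (illustrative,
# not the authors' code). Embeddings are assumed L2-normalised so that the
# dot product is a cosine similarity.
import torch
import torch.nn.functional as F


def cross_modal_triplet_loss(audio_emb: torch.Tensor,
                             video_emb: torch.Tensor,
                             margin: float = 0.2) -> torch.Tensor:
    """audio_emb, video_emb: (batch, dim) tensors where row i of each
    tensor comes from the same music-video clip."""
    # Similarity between every audio/video pair in the batch.
    sim = audio_emb @ video_emb.t()            # (batch, batch)
    pos = sim.diag().unsqueeze(1)              # similarity of matching pairs

    # Hinge on every negative, in both retrieval directions.
    cost_m2v = F.relu(margin + sim - pos)      # audio query i vs. videos j
    cost_v2m = F.relu(margin + sim - pos.t())  # video query j vs. audios i

    # Do not penalise the positive pair itself.
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost_m2v = cost_m2v.masked_fill(mask, 0.0)
    cost_v2m = cost_v2m.masked_fill(mask, 0.0)
    return cost_m2v.mean() + cost_v2m.mean()


if __name__ == "__main__":
    # Toy usage with random vectors standing in for pre-trained audio
    # embeddings (e.g. OpenL3) and projected video embeddings.
    a = F.normalize(torch.randn(8, 128), dim=1)
    v = F.normalize(torch.randn(8, 128), dim=1)
    print(cross_modal_triplet_loss(a, v))
```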

Laure Prétet, Gaël Richard, Geoffroy Peeters • 2021

Related benchmarks

Task                                    | Dataset                          | Result (Median Rank) | Rank
Segment-level Music-to-Video Retrieval  | MusicVid-YT8M (test)             | 277                  | 10
Segment-level Video-to-Music Retrieval  | MusicVid-YT8M (test)             | 349                  | 10
Music-to-Video Retrieval                | MusicVid-YT8M track-level (test) | 98                   | 7
Video-to-Music Retrieval                | MusicVid-YT8M track-level (test) | 234                  | 7
Music Retrieval                         | YouTube8M MusicVideo (test)      | 234                  | 6
