Cross-Modal Music-Video Recommendation: A Study of Design Choices
About
In this work, we study music/video cross-modal recommendation, i.e. recommending a music track for a video or vice versa. We rely on a self-supervised learning paradigm to learn from a large amount of unlabelled data: we jointly learn audio and video embeddings from their co-occurrence in music-video clips. We build upon a recent video-music retrieval system, VM-NET, which originally relies on an audio representation obtained from a set of statistics computed over handcrafted features. We demonstrate that using learned audio representations, such as the embeddings provided by the pre-trained MuSimNet, OpenL3, MusicCNN or AudioSet models, largely improves recommendations. We also validate the use of the cross-modal triplet loss originally proposed in VM-NET over the binary cross-entropy loss commonly used in self-supervised learning. We perform all our experiments on the Music Video Dataset (MVD).
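To make the loss concrete, below is a minimal PyTorch sketch of an inter-modal triplet loss of this kind, assuming cosine similarity, a fixed margin, and all in-batch negatives. The function name, the margin value, and the negative-sampling scheme are illustrative assumptions, not VM-NET's exact formulation (which also includes intra-modal constraints).

```python
import torch
import torch.nn.functional as F

def cross_modal_triplet_loss(audio_emb, video_emb, margin=0.2):
    """Bidirectional inter-modal triplet loss for a batch of
    co-occurring (audio, video) pairs: row i of each tensor comes
    from the same music-video clip; all other rows act as negatives."""
    # L2-normalise so that dot products are cosine similarities
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(video_emb, dim=1)
    sim = a @ v.t()                # (B, B) audio-video similarities
    pos = sim.diag().view(-1, 1)   # (B, 1) similarities of the true pairs
    # music-to-video: the matching video must beat every other video by `margin`
    m2v = F.relu(margin - pos + sim).fill_diagonal_(0.0)
    # video-to-music: the matching audio must beat every other audio by `margin`
    v2m = F.relu(margin - pos + sim.t()).fill_diagonal_(0.0)
    return m2v.mean() + v2m.mean()

# toy usage: 8 clips with 512-d audio and video embeddings
audio = torch.randn(8, 512)
video = torch.randn(8, 512)
print(cross_modal_triplet_loss(audio, video))
```

Minimising this loss pulls the embeddings of a track and its own video together while pushing apart mismatched pairs, which is what makes nearest-neighbour retrieval across modalities possible at test time.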
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Segment-level Music-to-Video Retrieval | MusicVid-YT8M (test) | Median Rank | 277 | 10 |
| Segment-level Video-to-Music Retrieval | MusicVid-YT8M (test) | Median Rank | 349 | 10 |
| Music-to-Video Retrieval | MusicVid-YT8M track-level (test) | Median Rank | 98 | 7 |
| Video-to-Music Retrieval | MusicVid-YT8M track-level (test) | Median Rank | 234 | 7 |
| Music Retrieval | YouTube8M MusicVideo (test) | Median Rank | 234 | 6 |
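For reference, the Median Rank (MedR) reported above is the median, over all queries, of the position at which the correct item appears in the ranked candidate list (lower is better). A minimal NumPy sketch, assuming a square query-by-candidate similarity matrix whose diagonal holds the correct matches:

```python
import numpy as np

def median_rank(sim):
    """Median rank (MedR) of the ground-truth item, 1-indexed.
    sim[i, j] is the similarity between query i and candidate j;
    candidate i is assumed to be the correct match for query i."""
    order = np.argsort(-sim, axis=1)  # candidates sorted best-first per query
    # position of the true candidate in each query's ranked list
    ranks = np.where(order == np.arange(sim.shape[0])[:, None])[1] + 1
    return float(np.median(ranks))

# toy usage: random scores for 100 queries over 100 candidates
rng = np.random.default_rng(0)
print(median_rank(rng.standard_normal((100, 100))))
```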