PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts
About
Perceiving multi-modal information and conducting dialogues with humans is a long-term goal of artificial intelligence. Pre-training is commonly regarded as an effective approach for multi-modal dialogue, but because multi-modal dialogue data is scarce, research on multi-modal dialogue pre-training remains limited. A further challenge stems from the encompassing nature of multi-modal dialogue, which spans various modalities and tasks; moreover, new forms of tasks may arise at unpredictable points in the future. Multi-modal dialogue models must therefore be flexible enough to adapt to such scenarios. This paper proposes **PaCE**, a unified, structured, compositional multi-modal dialogue pre-training framework. It combines several fundamental experts to accommodate multiple dialogue-related tasks, and can be pre-trained on limited dialogue data together with extensive non-dialogue multi-modal data. Furthermore, we propose a progressive training method in which previously trained experts assist new experts, facilitating the expansion of the model's capabilities. Experimental results demonstrate that PaCE achieves state-of-the-art results on eight multi-modal dialogue benchmarks.
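The compositional-expert and progressive-training ideas above can be illustrated with a minimal sketch. All class and expert names below (`Expert`, `PaCEModel`, the three stage names) are our own illustrative assumptions, not the paper's actual implementation; in the real framework each expert would be a Transformer module, not a string transform.

```python
# Hypothetical sketch of progressive, compositional experts.
# Names and structure are illustrative assumptions, not PaCE's real code.

class Expert:
    def __init__(self, name):
        self.name = name
        self.frozen = False  # frozen experts assist but are not updated

    def forward(self, x):
        # Placeholder transform; a real expert would be a Transformer block.
        return f"{x}->{self.name}"

class PaCEModel:
    def __init__(self):
        self.experts = {}

    def add_expert(self, name):
        # Progressive training: freeze all previously trained experts so
        # they can assist the new expert without being overwritten.
        for e in self.experts.values():
            e.frozen = True
        self.experts[name] = Expert(name)

    def compose(self, task_experts, x):
        # A task activates only its own subset of experts, applied in order.
        for name in task_experts:
            x = self.experts[name].forward(x)
        return x

model = PaCEModel()
model.add_expert("caption")     # stage 1: trained on non-dialogue data
model.add_expert("context")     # stage 2: dialogue context modeling
model.add_expert("generation")  # stage 3: response generation

out = model.compose(["caption", "context", "generation"], "input")
print(out)  # input->caption->context->generation
```

The key design point this sketch captures is that each downstream task selects a different combination of experts, while earlier experts remain fixed once a new training stage begins.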
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-modal Dialogue Retrieval | PhotoChat (test) | R@1 | 15.2 | 29 |
| Intent Prediction | PhotoChat (test) | F1 Score | 63.8 | 26 |
| Multi-modal Response Generation | SIMMC 2.0 | BLEU | 34.1 | 5 |
| Multi-modal Dialog State Tracking | SIMMC 2.0 | Slot F1 | 87 | 5 |
| Multi-modal Intent Prediction | MMDialog (test) | F1 Score | 77.6 | 4 |
| Multi-modal Dialog State Tracking | MMConv (test) | Categorical Score | 92.2 | 2 |
| Multi-modal Response Generation | MMConv | Inform | 34.5 | 2 |