
RMPL: Relation-aware Multi-task Progressive Learning with Stage-wise Training for Multimedia Event Extraction

About

Multimedia Event Extraction (MEE) aims to identify events and their arguments from documents that contain both text and images, which requires grounding event semantics across modalities. Progress in MEE is limited by the scarcity of annotated training data: M2E2, the only established benchmark, provides annotations for evaluation only, making direct supervised training impractical. Existing methods rely mainly on cross-modal alignment or inference-time prompting with Vision-Language Models (VLMs); these approaches do not explicitly learn structured event representations and often produce weak argument grounding in multimodal settings. To address these limitations, we propose RMPL, a Relation-aware Multi-task Progressive Learning framework for MEE under low-resource conditions. RMPL incorporates heterogeneous supervision from unimodal event extraction and multimedia relation extraction through stage-wise training: the model is first trained with a unified schema to learn shared event-centric representations across modalities, then fine-tuned for event mention identification and argument role extraction on mixed textual and visual data. Experiments on the M2E2 benchmark with multiple VLMs show consistent improvements across modality settings.

Yongkang Jin, Jianwen Luo, Jingjing Wang, Jianmin Yao, Yu Hong • 2026
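
To make the stage-wise schedule concrete, here is a minimal sketch of a two-stage multi-task training loop in the spirit of the abstract. Everything in it is an illustrative assumption, not the authors' code: UnifiedEventModel, the toy data loader, and the loss weights are hypothetical stand-ins for a VLM backbone with event mention identification (EMI) and argument role extraction (ARE) heads.

```python
# Hypothetical sketch of stage-wise multi-task training (not the paper's implementation).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class UnifiedEventModel(nn.Module):
    """Toy stand-in for a shared encoder with two task heads: EMI and ARE."""
    def __init__(self, dim=32, n_event_types=8, n_roles=12):
        super().__init__()
        self.encoder = nn.Linear(dim, dim)            # placeholder for a VLM backbone
        self.emi_head = nn.Linear(dim, n_event_types) # event mention identification
        self.are_head = nn.Linear(dim, n_roles)       # argument role extraction

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.emi_head(h), self.are_head(h)

def train_stage(model, loader, loss_weights, epochs=1, lr=1e-3):
    """One training stage: a weighted multi-task objective over EMI and ARE."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y_event, y_role in loader:
            emi_logits, are_logits = model(x)
            loss = (loss_weights["emi"] * ce(emi_logits, y_event)
                    + loss_weights["are"] * ce(are_logits, y_role))
            opt.zero_grad()
            loss.backward()
            opt.step()

def toy_loader(n=64, dim=32, n_event_types=8, n_roles=12):
    """Random tensors standing in for features from text-only, image-only,
    and multimedia sources mapped into one unified schema."""
    x = torch.randn(n, dim)
    y_event = torch.randint(0, n_event_types, (n,))
    y_role = torch.randint(0, n_roles, (n,))
    return DataLoader(TensorDataset(x, y_event, y_role), batch_size=16)

model = UnifiedEventModel()
# Stage 1: learn shared event-centric representations under the unified schema.
train_stage(model, toy_loader(), loss_weights={"emi": 1.0, "are": 1.0})
# Stage 2: fine-tune on mixed textual and visual data; the re-weighting toward
# argument role extraction here is an illustrative choice, not from the paper.
train_stage(model, toy_loader(), loss_weights={"emi": 0.5, "are": 1.0})
```

The two calls to train_stage mirror the progressive design: the first pass pools heterogeneous supervision under one schema, and the second specializes the task heads on the mixed-modality objective.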

Related benchmarks

Task                           Dataset           Metric          Result   Rank
Event Mention Identification   M2E2 multimedia   F1 (%)          92.4     15
Argument Role Extraction       M2E2 multimedia   F1 (%)          46.9     15
Event Mention Identification   M2E2 image-only   Precision (%)   83.6     14
Argument Role Extraction       M2E2 image-only   Precision (%)   66.4     14
Event Mention Identification   M2E2 text-only    Precision (%)   85.6     13
Argument Role Extraction       M2E2 text-only    Precision (%)   50.6     13
