
Meshed-Memory Transformer for Image Captioning

About

Transformer-based architectures represent the state of the art in sequence modeling tasks like machine translation and language understanding. Their applicability to multi-modal contexts like image captioning, however, is still largely under-explored. With the aim of filling this gap, we present M$^2$ - a Meshed Transformer with Memory for Image Captioning. The architecture improves both the image encoding and the language generation steps: it learns a multi-level representation of the relationships between image regions, integrating learned a priori knowledge, and uses a mesh-like connectivity at the decoding stage to exploit both low- and high-level features. Experimentally, we investigate the performance of the M$^2$ Transformer and different fully-attentive models in comparison with recurrent ones. When tested on COCO, our proposal achieves a new state of the art in single-model and ensemble configurations on the "Karpathy" test split and on the online test server. We also assess its performance when describing objects unseen in the training set. Trained models and code for reproducing the experiments are publicly available at: https://github.com/aimagelab/meshed-memory-transformer.
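The "memory" in the encoder refers to extending self-attention with learned key/value slots that can encode a priori knowledge not present in the input regions. A minimal NumPy sketch of this idea is below; the shapes, function name, and random weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_augmented_attention(X, Wq, Wk, Wv, Mk, Mv):
    """Self-attention over image-region features X, where the keys and
    values are extended with learned memory slots (Mk, Mv)."""
    Q = X @ Wq                       # queries: (n_regions, d)
    K = np.vstack([X @ Wk, Mk])      # keys:    (n_regions + n_mem, d)
    V = np.vstack([X @ Wv, Mv])      # values:  (n_regions + n_mem, d)
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V       # output:  (n_regions, d)

rng = np.random.default_rng(0)
d, n, m = 16, 5, 4                   # feature dim, regions, memory slots
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Mk, Mv = rng.standard_normal((m, d)), rng.standard_normal((m, d))
out = memory_augmented_attention(X, Wq, Wk, Wv, Mk, Mv)
print(out.shape)  # (5, 16)
```

In the full model this operator replaces standard self-attention in each encoder layer, and the meshed decoder then attends to all encoder layers through learned gating weights rather than only to the last one.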

Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, Rita Cucchiara • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 1.345 | 682 |
| Radiology Report Generation | MIMIC-CXR (test) | BLEU-4 | 0.101 | 172 |
| Image Captioning | MS-COCO (test) | CIDEr | 80.4 | 120 |
| Image Captioning | nocaps (val) | CIDEr (Overall) | 75 | 115 |
| Image Captioning | Flickr30k (test) | CIDEr | 68.4 | 103 |
| Image Captioning | MS-COCO online (test) | BLEU-4 (c5) | 39.7 | 64 |
| Medical Report Generation | MIMIC-CXR (test) | ROUGE-L | 0.264 | 62 |
| Image Captioning | MS COCO (Karpathy) | CIDEr-D | 131.2 | 56 |
| Medical Report Generation | IU-Xray (test) | ROUGE-L | 0.328 | 56 |
| Findings Generation | IU-Xray (test) | BLEU-1 | 0.437 | 47 |
Showing 10 of 30 rows

Other info

Code
