
Recurrent Fusion Network for Image Captioning

About

Recently, much progress has been made in image captioning, and an encoder-decoder framework has been adopted by virtually all state-of-the-art models. Under this framework, an input image is encoded by a convolutional neural network (CNN) and then translated into natural language with a recurrent neural network (RNN). Existing models built on this framework employ only one kind of CNN, e.g., ResNet or Inception-X, which describes image contents from a single viewpoint. Consequently, the semantic meaning of an input image cannot be comprehensively understood, which restricts captioning performance. In this paper, to exploit the complementary information from multiple encoders, we propose a novel Recurrent Fusion Network (RFNet) for image captioning. The fusion process in our model exploits the interactions among the outputs of the image encoders and generates new, compact yet informative representations for the decoder. Experiments on the MSCOCO dataset demonstrate the effectiveness of the proposed RFNet, which sets a new state of the art for image captioning.
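The core idea of the abstract, fusing region features from several CNN encoders into one compact representation for the decoder, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual RFNet architecture: the feature shapes, the random projection weights, and the single attention-pooled query are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical region features from two different CNN encoders
# (e.g. a ResNet-style and an Inception-style backbone); the grid
# sizes and dimensions below are illustrative, not from the paper.
feats_a = rng.standard_normal((49, 2048))   # encoder A: 7x7 grid, 2048-d
feats_b = rng.standard_normal((64, 1536))   # encoder B: 8x8 grid, 1536-d

d_model = 512  # shared fusion dimension (assumed)

def project(x, d_out, seed):
    """Linearly project features into a common space (random weights here)."""
    w = np.random.default_rng(seed).standard_normal((x.shape[1], d_out))
    return x @ (w / np.sqrt(x.shape[1]))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Map both encoders' outputs into one shared space and pool them together.
h = np.concatenate([project(feats_a, d_model, 1),
                    project(feats_b, d_model, 2)], axis=0)   # (113, 512)

# A decoder state (random here) attends over all regions from both
# encoders, yielding a single compact fused vector for caption generation.
query = rng.standard_normal(d_model)
weights = softmax(h @ query / np.sqrt(d_model))
fused = weights @ h                                          # (512,)

print(fused.shape, weights.shape)
```

The point of the sketch is only that heterogeneous encoder outputs can interact through a shared space before decoding; RFNet's actual fusion is recurrent and more elaborate than this single attention pool.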

Wenhao Jiang, Lin Ma, Yu-Gang Jiang, Wei Liu, Tong Zhang • 2018

Related benchmarks

Task              Dataset                            Metric        Result  Rank
Image Captioning  MS COCO Karpathy (test)            CIDEr         1.257   682
Image Captioning  MS-COCO (test)                     --            --      117
Image Captioning  MS COCO (Karpathy)                 CIDEr-D       121.9   56
Image Captioning  MS-COCO online (test)              BLEU-4 (c5)   38      49
Image Captioning  COCO c5 references online (test)   BLEU-1        80.4    24
Image Captioning  MSCOCO (test server)               BLEU-4 (c5)   38      22
Image Captioning  MS COCO 40,775 images (test)       CIDEr         125.1   15
