
Self-supervised vision-language pretraining for medical visual question answering

About

Medical visual question answering (VQA) is the task of answering clinical questions about a given radiographic image. It is a challenging problem that requires a model to integrate both visual and language information. To solve medical VQA with limited training data, the pretrain-finetune paradigm is widely used to improve model generalization. In this paper, we propose a self-supervised method that combines masked image modeling, masked language modeling, image-text matching, and image-text alignment via contrastive learning (M2I2) for pretraining on a medical image-caption dataset, and finetunes on downstream medical VQA tasks. The proposed method achieves state-of-the-art performance on all three public medical VQA datasets. Our code and models are available at https://github.com/pengfeiliHEU/M2I2.
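One of the four pretraining objectives, image-text alignment via contrastive learning, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it is a standard symmetric InfoNCE loss over a batch of paired image and text embeddings, with the temperature value chosen only for illustration.

```python
import numpy as np

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss for image-text contrastive alignment.

    img_emb, txt_emb: (batch, dim) arrays; matching rows are positive pairs.
    """
    # L2-normalize so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (batch, batch) similarity matrix
    labels = np.arange(len(logits))          # diagonal entries are the positives

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

# perfectly aligned pairs give a lower loss than shuffled (mismatched) pairs
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
aligned = contrastive_alignment_loss(emb, emb)
shuffled = contrastive_alignment_loss(emb, rng.permutation(emb))
```

In practice this loss would be one term in the combined M2I2 pretraining objective, summed with the masked image modeling, masked language modeling, and image-text matching losses.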

Pengfei Li, Gang Liu, Lin Tan, Jinying Liao, Shenjun Zhong • 2022

Related benchmarks

Task                       Dataset   Metric           Result  Rank
Visual Question Answering  VQA-RAD   Closed Accuracy  83.5    49
Visual Question Answering  Slake     Closed Accuracy  91.1    27
Visual Question Answering  PathVQA   Closed Accuracy  88      19
