
Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training

About

Medical vision-and-language pre-training provides a feasible solution for extracting effective vision-and-language representations from medical images and texts. However, few studies have been dedicated to this field to facilitate medical vision-and-language understanding. In this paper, we propose a self-supervised learning paradigm with multi-modal masked autoencoders (M³AE), which learns cross-modal domain knowledge by reconstructing missing pixels and tokens from randomly masked images and texts. Three key designs make this simple approach work. First, considering the different information densities of vision and language, we adopt different masking ratios for the input image and text, with a considerably larger masking ratio used for images. Second, we use visual and textual features from different layers to perform the reconstruction, dealing with the different levels of abstraction in vision and language. Third, we develop different decoder designs for vision and language (i.e., a Transformer for vision and a multi-layer perceptron for language). To perform a comprehensive evaluation and facilitate further research, we construct a medical vision-and-language benchmark including three tasks. Experimental results demonstrate the effectiveness of our approach, which achieves state-of-the-art results on all downstream tasks. In addition, we conduct further analysis to verify the effectiveness of the different components of our approach and various pre-training settings. The source code is available at https://github.com/zhjohnchan/M3AE.
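The asymmetric masking scheme described in the abstract (a much larger masking ratio for image patches than for text tokens) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 75% / 15% ratios are assumptions borrowed from common MAE- and BERT-style defaults, since the abstract only states that the image ratio is considerably larger.

```python
import numpy as np

def random_mask(num_items: int, mask_ratio: float, rng: np.random.Generator) -> np.ndarray:
    """Return a boolean mask with round(mask_ratio * num_items) entries set to True,
    chosen uniformly at random. Masked positions are the ones to be reconstructed."""
    num_masked = int(round(mask_ratio * num_items))
    mask = np.zeros(num_items, dtype=bool)
    masked_idx = rng.choice(num_items, size=num_masked, replace=False)
    mask[masked_idx] = True
    return mask

# Hypothetical ratios: images are information-sparse, so a high ratio is used;
# text is information-dense, so only a small fraction of tokens is masked.
IMAGE_MASK_RATIO = 0.75
TEXT_MASK_RATIO = 0.15

rng = np.random.default_rng(0)
image_mask = random_mask(196, IMAGE_MASK_RATIO, rng)  # e.g., 14x14 = 196 patches
text_mask = random_mask(40, TEXT_MASK_RATIO, rng)     # e.g., a 40-token report
```

The model would then encode only the visible patches/tokens and reconstruct the masked ones, with the two ratios tuned per modality.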

Zhihong Chen, Yuhao Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VQA-RAD | Closed Accuracy: 83.46 | 49 |
| Visual Question Answering | VQA-RAD (test) | Open-ended Accuracy: 67.23 | 33 |
| Medical Visual Question Answering | SLAKE (test) | Closed Accuracy: 87.82 | 29 |
| Visual Question Answering | Slake | Closed Accuracy: 87.82 | 27 |
| Classification | Rad-ChestCT | AUC: 72.2 | 25 |
| Classification | CC-CCII | Accuracy: 89.9 | 24 |
| Classification | CT-RATE | AUC: 0.81 | 24 |
| Classification | LUNA16 | AUC: 0.708 | 16 |
| Image-to-Text Retrieval | ROCO (test) | R@1: 22.9 | 9 |
| Text-to-Image Retrieval | ROCO (test) | R@1: 19.05 | 9 |
Showing 10 of 15 rows

Other info

Code
