Overcoming Data Limitation in Medical Visual Question Answering
About
Traditional approaches to Visual Question Answering (VQA) require large amounts of labeled training data. Unfortunately, such large-scale data is usually not available in the medical domain. In this paper, we propose a novel medical VQA framework that overcomes this labeled-data limitation. The proposed framework combines an unsupervised Denoising Auto-Encoder (DAE) with supervised Meta-Learning: the DAE leverages the large amount of unlabeled medical images, while Meta-Learning learns meta-weights that adapt quickly to the VQA task from limited labeled data. Together, these techniques allow the proposed framework to be trained efficiently on a small labeled training set. Experimental results show that our proposed method significantly outperforms state-of-the-art medical VQA methods.
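To illustrate the unsupervised half of the framework, here is a minimal sketch of the DAE objective: corrupt the input, then train the network to reconstruct the *clean* input, so the encoder can be pretrained on unlabeled images alone. This is only an illustration with a one-hidden-layer NumPy model and toy data; the paper's encoder is a convolutional network, and the layer sizes, noise level, and learning rate below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for unlabeled medical images: 200 flattened 8x8 patches.
X = rng.random((200, 64))

# One-hidden-layer denoising auto-encoder (hypothetical sizes for illustration).
n_in, n_hid = 64, 16
W1 = rng.normal(0.0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def forward(Xn):
    H = np.tanh(Xn @ W1 + b1)  # encoder: hidden representation
    R = H @ W2 + b2            # decoder: linear reconstruction
    return H, R

lr, noise = 0.5, 0.1
losses = []
for step in range(300):
    Xn = X + rng.normal(0.0, noise, X.shape)  # corrupt the input
    H, R = forward(Xn)
    losses.append(np.mean((R - X) ** 2))      # reconstruct the CLEAN input
    # Manual backprop through the two layers (mean-squared-error loss).
    dR = 2.0 * (R - X) / X.size
    dW2, db2 = H.T @ dR, dR.sum(0)
    dH = (dR @ W2.T) * (1.0 - H ** 2)         # tanh derivative
    dW1, db1 = Xn.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(losses[0], losses[-1])  # reconstruction loss should drop over training
```

After this unsupervised pretraining, the encoder weights (`W1`, `b1` here) would initialize the visual feature extractor, which meta-learning then adapts with the small labeled VQA set.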
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA-RAD | Closed Accuracy | 77.2 | 49 |
| Visual Question Answering | VQA-RAD (test) | Open-ended Accuracy | 49.2 | 33 |
| Medical Visual Question Answering | SLAKE (test) | Closed Accuracy | 79.8 | 29 |
| Visual Question Answering | SLAKE | Closed Accuracy | 79.8 | 27 |
| Visual Question Answering | PathVQA (test) | Overall Accuracy | 44.8 | 19 |
| Medical Visual Question Answering | VQA 2019 (test) | Overall Accuracy | 77.86 | 7 |
| Medical Visual Question Answering | VQA-RAD 2018 | Accuracy | 66.1 | 7 |