
Towards Faithful Reasoning in Comics for Small MLLMs

About

Comic-based visual question answering (CVQA) poses distinct challenges to multimodal large language models (MLLMs) due to its reliance on symbolic abstraction, narrative logic, and humor, which differ from conventional VQA tasks. Although Chain-of-Thought (CoT) prompting is widely used to enhance MLLM reasoning, surprisingly, its direct application to CVQA often degrades performance, especially in small-scale models. Our theoretical and empirical analyses reveal that standard CoT in CVQA suffers from state entanglement, spurious transitions, and exploration inefficiency, with small models particularly vulnerable in resource-constrained settings. To address these issues, we propose a novel comic reasoning framework, designed to produce more faithful and transferable reasoning chains in small MLLMs. Specifically, our framework combines modular CoT generation with GRPO-based reinforcement fine-tuning and a novel structured reward. Beyond comic VQA, we further evaluate our approach on a broader class of humor-centric and abstract visual reasoning tasks, including meme understanding and editorial cartoon interpretation. Across five challenging benchmarks, our 3B model outperforms state-of-the-art methods, and plug-in experiments yield an additional average improvement of 12.1% across different MLLMs.
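The abstract mentions GRPO-based reinforcement fine-tuning with a structured reward. The paper's exact reward design is not given here, but the core of GRPO is group-relative advantage estimation: several responses are sampled per prompt, and each response's reward is normalized against the statistics of its own sampling group, avoiding a learned value function. A minimal sketch of that normalization step (function name and example reward values are illustrative, not from the paper):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages, GRPO-style.

    Each sampled response's scalar reward is normalized against the
    mean and standard deviation of its sampling group, so responses
    that beat their group's average get positive advantage.
    """
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mean) / std for r in rewards]

# Example: structured-reward scores for four sampled reasoning chains
# on the same comic question (values are made up for illustration).
advantages = grpo_advantages([0.2, 0.8, 0.5, 0.5])
```

In fine-tuning, these advantages weight the policy-gradient update for each sampled chain; note they sum to zero within a group, so the group average acts as the baseline.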

Chengcheng Feng, Haojie Yin, Yucheng Jin, Kaizhu Huang • 2026

Related benchmarks

Task                           Dataset     Metric         Result   Rank
Deep Semantic Inference        DeepEval    Accuracy       64.3     21
Humor Detection                YesBut      Accuracy       62.9     21
Visual-Semantic Understanding  CII-Bench   Overall Score  44.7     21
Meme Captioning                MemeCap     BLEU-4         5.3      8
Visual Question Answering      DeepEval    Accuracy       64.3     8
Visual Question Answering      YesBut      Accuracy       62.9     8
Visual Question Answering      CII-Bench   Accuracy       44.7     8
Visual Question Answering      NewYorker   Accuracy       41.1     8
