
MultiFinRAG: An Optimized Multimodal Retrieval-Augmented Generation (RAG) Framework for Financial Question Answering

About

Financial documents, such as 10-Ks, 10-Qs, and investor presentations, span hundreds of pages and combine diverse modalities, including dense narrative text, structured tables, and complex figures. Answering questions over such content often requires joint reasoning across modalities, which strains traditional large language models (LLMs) and retrieval-augmented generation (RAG) pipelines due to token limitations, layout loss, and fragmented cross-modal context. We introduce MultiFinRAG, a retrieval-augmented generation framework purpose-built for financial QA. MultiFinRAG first performs multimodal extraction by grouping table and figure images into batches and sending them to a lightweight, quantized open-source multimodal LLM, which produces both structured JSON outputs and concise textual summaries. These outputs, along with narrative text, are embedded and indexed with modality-aware similarity thresholds for precise retrieval. A tiered fallback strategy then dynamically escalates from text-only to text+table+image contexts when necessary, enabling cross-modal reasoning while reducing irrelevant context. Despite running on commodity hardware, MultiFinRAG achieves 19 percentage points higher accuracy than ChatGPT-4o (free tier) on complex financial QA tasks involving text, tables, images, and combined multimodal reasoning.
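The retrieval step described above (modality-aware similarity thresholds plus a tiered fallback from text-only to text+table+image context) can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the threshold values, the chunk/index structure, and the confidence check are all assumptions introduced for the example.

```python
import math

# Assumed per-modality similarity thresholds (illustrative values only;
# the paper does not publish its thresholds here).
SIMILARITY_THRESHOLDS = {"text": 0.75, "table": 0.65, "figure": 0.60}

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_emb, index, modalities):
    """Keep chunks of the allowed modalities whose similarity clears
    that modality's threshold; return them best-first."""
    hits = []
    for chunk in index:  # each chunk: {"modality", "embedding", "text"}
        if chunk["modality"] not in modalities:
            continue
        score = cosine(query_emb, chunk["embedding"])
        if score >= SIMILARITY_THRESHOLDS[chunk["modality"]]:
            hits.append((score, chunk))
    hits.sort(key=lambda h: -h[0])
    return [chunk for _, chunk in hits]

def answer_with_fallback(query_emb, index, generate, is_confident):
    """Tiered fallback: try a text-only context first, and only escalate
    to text+table+figure when the first answer is not confident."""
    answer = None
    for tier in (("text",), ("text", "table", "figure")):
        context = retrieve(query_emb, index, set(tier))
        answer = generate(context)        # call the QA LLM on this context
        if is_confident(answer):
            return answer
    return answer  # last tier's answer if no tier produced a confident one
```

The escalation order keeps cheap text-only retrieval on the common path and only pays for table and figure context when the first pass fails, which matches the paper's goal of reducing irrelevant multimodal context.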

Chinmay Gondhalekar, Urjitkumar Patel, Fang-Chun Yeh • 2025

Related benchmarks

Task                          Dataset                              Result          Rank
Financial Question Answering  MultiFinRAG Text-based               Accuracy 90.4   5
Financial Question Answering  MultiFinRAG Image-based              Accuracy 66.7   5
Financial Question Answering  MultiFinRAG Table-based split        Accuracy 69.4   5
Financial Question Answering  MultiFinRAG Text+Image Table-based   Accuracy 40.0   5
Financial Question Answering  MultiFinRAG Total                    Accuracy 75.3   5
