
Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models

About

Multimodal large language models are promising for clinical visual question answering, but scaling them to 3D imaging is hindered by high computational cost. Prior methods often rely on 2D slices or fixed-length token compression, disrupting volumetric continuity and obscuring subtle findings. We present Photon, a framework that represents 3D medical volumes with token sequences of variable length. Photon introduces instruction-conditioned token scheduling and surrogate gradient propagation to adaptively reduce tokens during both training and inference, lowering computational cost while mitigating the attention dilution caused by redundant tokens. It incorporates a custom backpropagation rule with gradient restoration to enable differentiable optimization despite the discrete token-dropping operation. To stabilize token compression and ensure reliable use of visual evidence, Photon further applies regularization objectives that mitigate language-only bias and improve reliability. Experiments on diverse medical visual question answering tasks show that Photon achieves state-of-the-art accuracy while reducing resource usage and accelerating both training and inference.
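The core difficulty the abstract describes is that dropping tokens is a discrete decision, so gradients cannot flow to the token scheduler directly. A common workaround is a straight-through-style surrogate: the forward pass applies a hard keep/drop mask, while the backward pass computes gradients as if the soft keep-probabilities had been used. The sketch below illustrates that idea in NumPy; the function names, sigmoid scoring, and top-k keep rule are illustrative assumptions, not Photon's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def token_drop_forward(tokens, logits, keep_ratio=0.5):
    """Hard forward pass: keep only the top-k highest-scoring tokens.

    tokens: (n, d) visual token embeddings
    logits: (n,) scheduler scores, one per token
    """
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    scores = sigmoid(logits)            # soft keep-probabilities in (0, 1)
    keep = np.argsort(scores)[-k:]      # indices of the k tokens to keep
    hard_mask = np.zeros(n)
    hard_mask[keep] = 1.0
    out = tokens * hard_mask[:, None]   # dropped tokens are zeroed out
    return out, hard_mask, scores

def token_drop_backward(grad_out, tokens, scores):
    """Straight-through surrogate backward pass.

    Gradients are computed as if the forward pass had used the *soft*
    mask (out ~= tokens * scores), so even dropped tokens receive a
    gradient signal and the scheduler remains trainable.
    """
    grad_scores = (grad_out * tokens).sum(axis=1)        # d out / d scores
    grad_logits = grad_scores * scores * (1.0 - scores)  # chain sigmoid
    return grad_logits
```

Note that the hard mask never appears in the backward pass: the surrogate gradient reaches every token's logit, including the dropped ones, which is what keeps the discrete selection differentiable end to end.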

Chengyu Fang, Heng Guo, Zheng Jiang, Chunming He, Xiu Li, Minfeng Xu• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Medical Visual Question Answering | Slake | Accuracy | 84.25 | 239
Medical Reasoning | DeepTumorVQA | Fatty Liver Assessment | 77.3 | 13
Medical Visual Question Answering | DeepTumorVQA | Average Score | 68.6 | 13
Visual Reasoning | DeepTumorVQA | Adjacent Organ Score | 69.6 | 13
Recognition | DeepTumorVQA | Colon Lesion Existence | 88.1 | 13
Measurement | DeepTumorVQA | Lesion Volume Score | 82.5 | 13
Anomaly Detection | 3D-RAD | BLEU | 42.33 | 9
Existence Detection | 3D-RAD | Accuracy | 83.07 | 9
Image Observation | 3D-RAD | BLEU | 51.59 | 9
Longitudinal Temporal Diagnosis | 3D-RAD | Accuracy | 77.01 | 9

(Showing 10 of 13 rows.)
