
Universal Skeleton Understanding via Differentiable Rendering and MLLMs

About

Multimodal large language models (MLLMs) exhibit strong visual-language reasoning, yet remain confined to their native modalities and cannot directly process structured, non-visual data such as human skeletons. Existing methods either compress skeleton dynamics into lossy feature vectors for text alignment, or quantize motion into discrete tokens that generalize poorly across heterogeneous skeleton formats. We present SkeletonLLM, which achieves universal skeleton understanding by translating arbitrary skeleton sequences into the MLLM's native visual modality. At its core is DrAction, a differentiable, format-agnostic renderer that converts skeletal kinematics into compact image sequences. Because the pipeline is end-to-end differentiable, MLLM gradients can directly guide the rendering to produce task-informative visual tokens. To further enhance reasoning capabilities, we introduce a cooperative training strategy: Causal Reasoning Distillation transfers structured, step-by-step reasoning from a teacher model, while Discriminative Finetuning sharpens decision boundaries between confusable actions. SkeletonLLM demonstrates strong generalization in open-vocabulary action recognition, while its learned reasoning capabilities naturally extend to motion captioning and question answering across heterogeneous skeleton formats -- suggesting a viable path for applying MLLMs to non-native modalities. Code will be released upon acceptance.
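The abstract does not include code, but the core idea behind DrAction — rendering skeletal joints into images through a smooth, differentiable map so gradients from the MLLM can reach the renderer — can be illustrated with a minimal sketch. All names below are hypothetical: each joint is splatted as a 2D Gaussian, so every pixel is a smooth function of the joint coordinates (in practice an autograd framework would supply the gradients), and no fixed joint topology is assumed, matching the format-agnostic claim.

```python
import numpy as np

def render_skeleton(joints_2d, size=32, sigma=1.5):
    """Splat 2D joints (shape (N, 2), coordinates in [0, 1]^2) onto a
    size x size image as isotropic Gaussians. Because each pixel value
    is a smooth function of the joint coordinates, the rendering is
    differentiable end to end (autograd would provide the gradients)."""
    ys, xs = np.mgrid[0:size, 0:size] / (size - 1)  # normalized pixel grid
    img = np.zeros((size, size))
    for jx, jy in joints_2d:
        img += np.exp(-((xs - jx) ** 2 + (ys - jy) ** 2)
                      / (2 * (sigma / size) ** 2))
    return np.clip(img, 0.0, 1.0)

# A toy 3-joint "skeleton": the renderer only consumes coordinates,
# so heterogeneous skeleton formats (different joint counts/orders)
# all map into the same visual modality.
frame = render_skeleton(np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]]))
```

A sequence of such frames would then be fed to the MLLM as ordinary visual tokens; the paper's contribution is that the rendering parameters themselves are trained by the MLLM's gradients, which this sketch only gestures at.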

Ziyi Wang, Peiming Li, Xinshun Wang, Yang Tang, Kai-Kuang Ma, Mengyuan Liu • 2026

Related benchmarks

Task                 Dataset                            Metric      Result   Rank
Action Recognition   NTU-60 (48/12 split)               Top-1 Acc   64.72    103
Action Recognition   NTU-120 (96/24 split)              Top-1 Acc   67.2     84
Action Recognition   NTU-60 (55/5 split)                Top-1 Acc   87.37    57
Action Recognition   NTU-120 (110/10 split)             Top-1 Acc   76.05    56
Action Recognition   PKU-MMD (XSub)                     Top-1 Acc   90.1     43
Action Recognition   NTU-60 (40/20 seen/unseen)         Top-1 Acc   46.15    18
Action Recognition   PKU-MMD cross-subject (39/12)      Top-1 Acc   63.9     12
Action Recognition   PKU-MMD cross-view Xview (46/5)    Top-1 Acc   89.5     12
Action Recognition   PKU-MMD cross-view Xview (39/12)   Top-1 Acc   64.2     12
Action Recognition   NTU-60 (30/30 seen/unseen)         Top-1 Acc   37.84    11

(Showing 10 of 14 rows)
