SPRig: Self-Supervised Pose-Invariant Rigging from Mesh Sequences
About
State-of-the-art rigging methods assume a canonical rest pose, an assumption that fails for sequential data such as animal motion capture or AIGC/video-derived mesh sequences, where no T-pose is available. Applied frame by frame, these methods are not pose-invariant and produce topological inconsistencies across frames. We therefore propose SPRig, a general fine-tuning framework that enforces cross-frame consistency losses on top of existing models to learn pose-invariant rigs. We validate our approach with a new permutation-invariant stability protocol. Experiments demonstrate state-of-the-art temporal stability: SPRig produces coherent rigs from challenging sequences and dramatically reduces the artifacts that plague baseline methods. The code will be released publicly upon acceptance.
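To make the idea of a cross-frame consistency loss concrete, here is a minimal sketch in PyTorch. It is not the released SPRig code: the tensor layout, the function name `cross_frame_consistency_loss`, and the 0.1 smoothness weight are assumptions made for illustration. The sketch penalizes skinning weights that drift across frames (weights should be pose-invariant) while only discouraging jitter, not articulation, in the joint trajectories.

```python
# Illustrative sketch only; shapes and weighting are assumptions, not SPRig's actual losses.
import torch
import torch.nn.functional as F

def cross_frame_consistency_loss(pred_joints, pred_skin_weights):
    """
    pred_joints:       (T, J, 3)  per-frame predicted joint positions
    pred_skin_weights: (T, V, J)  per-frame predicted skinning weights
    """
    # Skinning weights are an intrinsic property of the shape, so they should
    # agree across frames: penalize deviation from the sequence mean.
    mean_w = pred_skin_weights.mean(dim=0, keepdim=True)                      # (1, V, J)
    skin_consistency = F.l1_loss(pred_skin_weights, mean_w.expand_as(pred_skin_weights))

    # Joints move with the pose, so penalize only second-order differences
    # (acceleration) to discourage jitter while allowing articulation.
    accel = pred_joints[2:] - 2.0 * pred_joints[1:-1] + pred_joints[:-2]      # (T-2, J, 3)
    joint_smoothness = accel.abs().mean()

    return skin_consistency + 0.1 * joint_smoothness
```

In a fine-tuning loop, this term would be added to the base model's per-frame rigging losses over each mesh sequence.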
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Skinning weight prediction | Articulation-XL | Precision | 86.3 | 5 |
| Skinning weight prediction | ModelsResource | Precision | 0.732 | 5 |
| Skeleton Generation | DeformingThings4D | PJDD | 0.68 | 3 |
| Skinning | DT4D | L1 Error (B, C -> A) | 925.8 | 3 |
| Static Generation Quality | Articulation-XL v2 | CD-J2J | 0.027 | 2 |
| Static skinning prediction | Diverse-pose | Precision | 84.2 | 2 |
| Temporal Stability | DeformingThings4D (DT4D) (val) | PJDD | 0.68 | 2 |
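The temporal-stability rows above are evaluated with a permutation-invariant protocol. The sketch below illustrates the general idea under stated assumptions: predicted joints are matched across consecutive frames with the Hungarian algorithm before drift is measured, so a mere re-ordering of the skeleton does not inflate the error. The function name `permutation_invariant_drift` is hypothetical, and this is not the definition of PJDD used in the table.

```python
# Illustrative only; not the PJDD metric itself.
import numpy as np
from scipy.optimize import linear_sum_assignment

def permutation_invariant_drift(joints_seq):
    """
    joints_seq: list of (J, 3) arrays, one per frame (same joint count J).
    Returns mean per-joint displacement between consecutive frames after
    optimally matching joints by Euclidean distance.
    """
    drifts = []
    for prev, curr in zip(joints_seq[:-1], joints_seq[1:]):
        # Pairwise distances between the previous and current joint sets.
        cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)   # (J, J)
        row, col = linear_sum_assignment(cost)   # optimal one-to-one matching
        drifts.append(cost[row, col].mean())     # mean matched displacement
    return float(np.mean(drifts))
```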