
Where It Moves, It Matters: Referring Surgical Instrument Segmentation via Motion

About

Enabling intuitive, language-driven interaction with surgical scenes is a critical step toward intelligent operating rooms and autonomous surgical robotic assistance. However, referring segmentation (localizing surgical instruments from natural-language descriptions) remains underexplored in surgical videos, and existing approaches generalize poorly because they rely on static visual cues and predefined instrument names. In this work, we introduce SurgRef, a novel motion-guided framework that grounds free-form language expressions in instrument motion, capturing how tools move and interact across time rather than what they look like. This allows models to understand and segment instruments even under occlusion, ambiguity, or unfamiliar terminology. To train and evaluate SurgRef, we present Ref-IMotion, a diverse, multi-institutional video dataset with dense spatiotemporal masks and rich motion-centric expressions. SurgRef achieves state-of-the-art accuracy and generalization across surgical procedures, setting a new benchmark for robust, language-driven surgical video segmentation.

Meng Wei, Kun Yuan, Shi Li, Yue Zhou, Long Bai, Nassir Navab, Hongliang Ren, Hong Joo Lee, Tom Vercauteren, Nicolas Padoy • 2026

Related benchmarks

Task                                  Dataset                        Metric  Result  Rank
Referring Video Object Segmentation   Ref-EndoVis tool 18 (test)     J&F     84.48   11
Referring Video Object Segmentation   EndoVis-IM17 (test)            J       89.91   5
Referring Video Object Segmentation   GraSP-IM (test)                J       84.92   2
Referring Video Object Segmentation   CholecSeg8k-IM (unseen test)   J       66.92   2
