UniSurg: A Video-Native Foundation Model for Universal Understanding of Surgical Videos
About
While foundation models have advanced surgical video analysis, current approaches rely predominantly on pixel-level reconstruction objectives that waste model capacity on low-level visual details (such as smoke, specular reflections, and fluid motion) rather than on the semantic structures essential for surgical understanding. We present UniSurg, a video-native foundation model that shifts the learning paradigm from pixel-level reconstruction to latent motion prediction. Built on the Video Joint Embedding Predictive Architecture (V-JEPA), UniSurg introduces three key technical innovations tailored to surgical videos: 1) motion-guided latent prediction to prioritize semantically meaningful regions, 2) spatiotemporal affinity self-distillation to enforce relational consistency, and 3) feature diversity regularization to prevent representation collapse in texture-sparse surgical scenes. To enable large-scale pretraining, we curate UniSurg-15M, the largest surgical video dataset to date, comprising 3,658 hours of video from 50 sources across 13 anatomical regions. Extensive experiments across 17 benchmarks demonstrate that UniSurg significantly outperforms state-of-the-art methods on surgical workflow recognition (+14.6% F1 on EgoSurgery, +10.3% on PitVis), action triplet recognition (39.54% mAP-IVT on CholecT50), skill assessment, polyp segmentation, and depth estimation. These results establish UniSurg as a new standard for universal, motion-oriented surgical video understanding.
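The core idea, predicting the latent features of motion-salient regions rather than reconstructing their pixels, can be illustrated with a toy sketch. The snippet below is an illustrative assumption, not the UniSurg implementation: it uses frame differencing as a cheap motion proxy, masks the highest-motion patches, and regresses their latents from a context encoder against a gradient-free target encoder (which in V-JEPA-style training would be an EMA copy). The module sizes, `Enc`, and `jepa_step` are all hypothetical names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH, DIM = 8, 64  # illustrative patch size and latent dimension


class Enc(nn.Module):
    """Toy patch encoder standing in for a video transformer backbone."""

    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(PATCH * PATCH, DIM)

    def forward(self, patches):  # (B, N, PATCH*PATCH) -> (B, N, DIM)
        return self.proj(patches)


def patchify(frames):
    # frames: (B, T, H, W) grayscale; tile each frame into PATCH x PATCH patches
    B, T, H, W = frames.shape
    p = frames.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)
    return p.reshape(B, T * (H // PATCH) * (W // PATCH), PATCH * PATCH)


def motion_scores(frames):
    # Cheap motion proxy: mean absolute temporal difference per patch.
    diff = torch.zeros_like(frames)
    diff[:, 1:] = (frames[:, 1:] - frames[:, :-1]).abs()
    return patchify(diff).mean(-1)  # (B, N)


def jepa_step(frames, ctx_enc, tgt_enc, pred, mask_ratio=0.5):
    patches = patchify(frames)  # (B, N, PATCH*PATCH)
    B, N, _ = patches.shape
    k = int(N * mask_ratio)
    # Motion-guided masking: hide the most motion-salient patches so the
    # model must predict their latents from the remaining context.
    masked = motion_scores(frames).topk(k, dim=1).indices  # (B, k)
    with torch.no_grad():  # target encoder receives no gradients
        target = tgt_enc(patches)  # (B, N, DIM)
    visible = patches.clone()
    visible.scatter_(1, masked.unsqueeze(-1).expand(-1, -1, patches.shape[-1]), 0.0)
    pred_lat = pred(ctx_enc(visible))  # predict latents from visible context
    idx = masked.unsqueeze(-1).expand(-1, -1, DIM)
    # Loss lives in latent space, only over the masked high-motion patches.
    return F.smooth_l1_loss(pred_lat.gather(1, idx), target.gather(1, idx))


# Usage on random "video" input
torch.manual_seed(0)
frames = torch.rand(2, 4, 32, 32)  # (batch, time, height, width)
ctx_enc, tgt_enc, pred = Enc(), Enc(), nn.Linear(DIM, DIM)
loss = jepa_step(frames, ctx_enc, tgt_enc, pred)
```

Because the objective never touches pixel values directly, capacity is not spent modeling smoke or specular texture; only the latent content of the moving regions is supervised.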
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Surgical Phase Recognition | Cholec80 | Average F1: 84.17 | 35 |
| Action Triplet Recognition | CholecT50 | AP (I): 91.55 | 27 |
| Action Quality Assessment | JIGSAWS | -- | 20 |
| Action Recognition | SurgicalActions160 (test) | Accuracy: 75.63 | 14 |
| Action Recognition | PolypDiag (test) | Accuracy: 98.81 | 14 |
| Depth Estimation | C3VD | RMSE: 1.88 | 14 |
| Surgical Workflow Recognition | OphNet | Accuracy: 73.04 | 14 |
| Surgical Workflow Recognition | PMLR 50 | Accuracy: 91.91 | 14 |
| Surgical Workflow Recognition | Autolaparo | Accuracy: 86.37 | 14 |
| Surgical Workflow Recognition | M2CAI 2016 | Accuracy: 89.45 | 14 |