From Phase Grounding to Intelligent Surgical Narratives

About

Surgical video timelines are an important component of tool-assisted surgery, as they allow surgeons to quickly focus on key parts of the procedure. Current practice has two extremes: the surgeon fills out a post-operative (post-op) report, which is often vague, or manually annotates the surgical video, which is highly time-consuming. Our proposed method sits between these two extremes: we automatically create a surgical timeline and narrative directly from the surgical video. To achieve this, we employ a CLIP-based multi-modal framework that aligns surgical video frames with textual gesture descriptions. Specifically, we use the CLIP visual encoder to extract representations from surgical video frames and the text encoder to embed the corresponding gesture sentences into a shared embedding space. We then fine-tune the model to improve the alignment between video gestures and textual tokens. Once trained, the model predicts gestures and phases for video frames, enabling the construction of a structured surgical timeline. This approach leverages pretrained multi-modal representations to bridge visual gestures and textual narratives, reducing the need for manual video review and annotation by surgeons.
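
The frame-gesture alignment step can be sketched with off-the-shelf CLIP components. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the checkpoint name (openai/clip-vit-base-patch32), the example gesture sentences, and the helper names alignment_loss and predict_gesture are placeholders, and the standard symmetric CLIP contrastive objective is assumed for fine-tuning.

```python
# Minimal sketch of CLIP-based frame-gesture alignment (assumptions noted above).
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder gesture descriptions; real ones would come from annotated narratives.
gesture_texts = [
    "grasping the gallbladder with a retractor",
    "dissecting the cystic duct",
    "applying a clip to the cystic artery",
]

def alignment_loss(frames, texts):
    """Symmetric contrastive loss over paired frames/texts (frame i matches text i)."""
    inputs = processor(text=texts, images=frames, return_tensors="pt", padding=True)
    # logits_per_image: (num_frames, num_texts) similarity matrix scaled by temperature
    logits = model(**inputs).logits_per_image
    labels = torch.arange(len(frames))
    # Frame-to-text and text-to-frame cross-entropy, averaged, as in standard CLIP
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

@torch.no_grad()
def predict_gesture(frame: Image.Image) -> str:
    """Assign the best-matching gesture description to a single video frame."""
    inputs = processor(text=gesture_texts, images=frame, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return gesture_texts[probs.argmax().item()]
```

Per-frame gesture predictions from such a model can then be grouped into contiguous segments to form the phase timeline and narrative described above.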

Ethan Peterson, Huixin Zhan • 2026

Related benchmarks

Task                         Dataset           Result                          Rank
Surgical Phase Recognition   Cholec80 (test)   --                              16
Surgical Phase Recognition   Cholec80          Top-5 Overall Accuracy: 70.35   4
