
Surgical Skill Assessment via Video Semantic Aggregation

About

Automated video-based assessment of surgical skills is a promising approach to assisting young surgical trainees, especially in resource-poor areas. Existing works often resort to a joint CNN-LSTM framework that models long-term relationships with LSTMs on spatially pooled short-term CNN features. However, this practice inevitably neglects the differences among semantic concepts in the spatial dimension, such as tools, tissues, and background, impeding subsequent temporal relationship modeling. In this paper, we propose a novel skill assessment framework, Video Semantic Aggregation (ViSA), which discovers different semantic parts and aggregates them across spatiotemporal dimensions. The explicit discovery of semantic parts provides an explanatory visualization that helps understand the neural network's decisions. It also enables us to further incorporate auxiliary information, such as kinematic data, to improve representation learning and performance. Experiments on two datasets show the competitiveness of ViSA compared with state-of-the-art methods. Source code is available at: bit.ly/MICCAI2022ViSA.
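The contrast the abstract draws can be illustrated with a small numpy sketch: global average pooling collapses every spatial position of a frame's CNN feature map into one vector, whereas a semantic-aggregation scheme softly assigns each position to one of K semantic groups (e.g. tools, tissue, background) and pools within each group. All shapes, prototypes, and the similarity-based soft assignment below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T frames, C channels, an H x W spatial grid, K semantic groups.
T, C, H, W, K = 8, 16, 7, 7, 3

feats = rng.standard_normal((T, C, H, W))   # per-frame CNN feature maps
prototypes = rng.standard_normal((K, C))    # assumed learned "semantic part" vectors

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Baseline: global average pooling mixes all spatial positions together,
# blending tools, tissue, and background into a single vector per frame.
pooled = feats.mean(axis=(2, 3))                  # (T, C)

# Aggregation sketch: softly assign each spatial position to a semantic group
# by similarity to the prototypes, then average-pool within each group.
flat = feats.reshape(T, C, H * W)                 # (T, C, HW)
sim = np.einsum('kc,tcn->tkn', prototypes, flat)  # (T, K, HW) similarity scores
assign = softmax(sim, axis=1)                     # each position sums to 1 over groups
group_feats = (
    np.einsum('tkn,tcn->tkc', assign, flat)
    / (assign.sum(axis=2, keepdims=True) + 1e-8)  # (T, K, C) per-group features
)
```

The (T, K, C) group sequences keep the semantic parts separate for the subsequent temporal model, while the baseline (T, C) pooled sequence has already mixed them.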

Zhenqiang Li, Lin Gu, Weimin Wang, Ryosuke Nakamura, Yoichi Sato • 2022

Related benchmarks

Task                        Dataset                               Metric  Result  Rank
Surgical Skill Assessment   JIGSAWS Across Tasks                  SCC     0.90    7
Surgical Skill Assessment   JIGSAWS Knot Tying (independent)      SCC     0.92    6
Surgical Skill Assessment   JIGSAWS Needle Passing (independent)  SCC     0.93    6
Surgical Skill Assessment   JIGSAWS Suturing (independent)        SCC     0.84    6
