
Recognizing Surgical Activities with Recurrent Neural Networks

About

We apply recurrent neural networks to the task of recognizing surgical activities from robot kinematics. Prior work in this area focuses on recognizing short, low-level activities, or gestures, and has been based on variants of hidden Markov models and conditional random fields. In contrast, we work on recognizing both gestures and longer, higher-level activities, or maneuvers, and we model the mapping from kinematics to gestures/maneuvers with recurrent neural networks. To our knowledge, we are the first to apply recurrent neural networks to this task. Using a single model and a single set of hyperparameters, we match state-of-the-art performance for gesture recognition and advance state-of-the-art performance for maneuver recognition, in terms of both accuracy and edit distance. Code is available at https://github.com/rdipietro/miccai-2016-surgical-activity-rec.
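The abstract describes mapping a sequence of kinematic frames to a per-frame gesture/maneuver label with a recurrent network. As a rough illustration only (not the authors' implementation; see the linked repository for that), here is a minimal NumPy sketch of a vanilla tanh RNN tagging each frame. All weight names and dimensions are hypothetical:

```python
import numpy as np

def rnn_tag_frames(x, Wxh, Whh, Why, bh, by):
    """Label each kinematic frame with a simple tanh RNN.

    x: (T, D) array of kinematic frames; returns a list of T class ids."""
    h = np.zeros(Whh.shape[0])                     # hidden state, starts at zero
    preds = []
    for frame in x:
        h = np.tanh(Wxh @ frame + Whh @ h + bh)    # recurrent state update
        logits = Why @ h + by                      # per-frame class scores
        preds.append(int(np.argmax(logits)))
    return preds

# Toy example: 10 frames of 4-D kinematics, 8 hidden units, 3 label classes.
rng = np.random.default_rng(0)
D, H, C, T = 4, 8, 3, 10
x = rng.standard_normal((T, D))
preds = rnn_tag_frames(x,
                       Wxh=rng.standard_normal((H, D)) * 0.1,
                       Whh=rng.standard_normal((H, H)) * 0.1,
                       Why=rng.standard_normal((C, H)) * 0.1,
                       bh=np.zeros(H), by=np.zeros(C))
assert len(preds) == T and all(0 <= p < C for p in preds)
```

In practice the paper uses gated recurrent units (LSTM-style cells) trained on labeled kinematic sequences, but the frame-in, label-out structure is the same as in this sketch.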

Robert DiPietro, Colin Lea, Anand Malpani, Narges Ahmidi, S. Swaroop Vedula, Gyusung I. Lee, Mija R. Lee, Gregory D. Hager • 2016

Related benchmarks

Task                                   Dataset                            Result                     Rank
Action Segmentation                    JIGSAWS                            Accuracy: 83.3             19
Action Recognition                     JIGSAWS Suturing (LOSO)            Per-frame Accuracy: 83.3   18
Surgical Gesture Segmentation          JIGSAWS Kinematic suturing task    Accuracy: 83.3             9
Action Segmentation                    50 Salads (eval setup)             Edit Distance: 54.5        9
Action Segmentation and Recognition    50 Salads (eval granularity)       Accuracy: 73.3             4
Gesture Recognition                    JIGSAWS (Leave-one-user-out)       --                         3
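The benchmarks above report both frame-level accuracy and a segment-level edit distance. A common way to compute the latter in this line of work is to collapse per-frame labels into a sequence of segment labels and score it against the ground-truth segment sequence with a normalized Levenshtein distance. The helper names below are hypothetical, not taken from the paper's code:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn sequence a into sequence b (dynamic programming)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def segments(frame_labels):
    """Collapse per-frame labels into a segment-level label sequence,
    e.g. [0, 0, 1, 1, 2] -> [0, 1, 2]."""
    return [lab for i, lab in enumerate(frame_labels)
            if i == 0 or lab != frame_labels[i - 1]]

def edit_score(pred_frames, true_frames):
    """Segment-level edit score in [0, 100]; higher is better."""
    p, t = segments(pred_frames), segments(true_frames)
    return (1 - levenshtein(p, t) / max(len(p), len(t), 1)) * 100

# A perfect prediction scores 100; missing a segment costs one edit.
assert edit_score([0, 0, 1, 1], [0, 0, 1, 1]) == 100.0
assert edit_score([0, 0, 0, 0], [0, 0, 1, 1]) == 50.0
```

Because it ignores segment durations, this metric penalizes over-segmentation (spurious short segments) in a way that frame-level accuracy does not.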

Other info

Code

https://github.com/rdipietro/miccai-2016-surgical-activity-rec