
Continuous Adaptation for Interactive Object Segmentation by Learning from Corrections

About

In interactive object segmentation, a user collaborates with a computer vision model to segment an object. Recent works employ convolutional neural networks for this task: given an image and a set of corrections made by the user as input, they output a segmentation mask. These approaches achieve strong performance by training on large datasets, but they keep the model parameters unchanged at test time. Instead, we recognize that user corrections can serve as sparse training examples, and we propose a method that capitalizes on this idea to update the model parameters on-the-fly to the data at hand. Our approach enables adaptation to a particular object and its background, to distribution shifts in a test set, to specific object classes, and even to large domain changes, where the imaging modality changes between training and testing. We perform extensive experiments on 8 diverse datasets and show that, compared to a model with frozen parameters, our method reduces the required corrections (i) by 9%-30% when distribution shifts are small between training and testing; (ii) by 12%-44% when specializing to a specific class; and (iii) by 60% and 77% when we completely change domain between training and testing.
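The core idea above — treating each user click as a sparse labeled pixel and taking gradient steps on the model parameters at test time — can be sketched with a toy per-pixel logistic classifier. This is an illustrative stand-in for the paper's segmentation network, not its actual implementation; the function name, feature tensor, and click format are assumptions for the sketch.

```python
import numpy as np

def adapt_on_corrections(w, b, feats, clicks, lr=0.1, steps=10):
    """Test-time adaptation sketch: user clicks act as sparse training examples.

    w, b   : parameters of a per-pixel logistic classifier (illustrative
             stand-in for the segmentation network's weights).
    feats  : (H, W, D) array of per-pixel features.
    clicks : list of (row, col, label) tuples, label 1 = object, 0 = background.
    """
    for _ in range(steps):
        for (r, c, y) in clicks:
            x = feats[r, c]                          # feature at the corrected pixel
            p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted object probability
            g = p - y                                # gradient of the cross-entropy loss
            w -= lr * g * x                          # update using only the sparse clicks
            b -= lr * g
    return w, b
```

After a few such updates the model's prediction agrees with the corrections at the clicked pixels, and — because the parameters themselves changed — the adaptation also carries over to nearby, similar pixels of the same object and background.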

Theodora Kontogianni, Michael Gygli, Jasper Uijlings, Vittorio Ferrari • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Interactive Segmentation | Berkeley | NoC@90 | 4.94 | 230 |
| Interactive Segmentation | GrabCut | NoC@90 | 3.07 | 225 |
| Interactive Segmentation | DAVIS | -- | -- | 197 |
| Interactive Segmentation | Pascal VOC | NoC@85 | 3.18 | 43 |
| Interactive Segmentation | PASCAL VOC 12 (val) | Clicks @ 85% IoU | 3.18 | 7 |
| Interactive Segmentation | DAVIS (10% of frames) | Clicks @ 85% IoU | 5.16 | 4 |
