
Extracting Deformation-Aware Local Features by Learning to Deform

About

Despite the advances in extracting local features achieved by handcrafted and learning-based descriptors, they are still limited by the lack of invariance to non-rigid transformations. In this paper, we present a new approach to compute features from still images that are robust to non-rigid deformations to circumvent the problem of matching deformable surfaces and objects. Our deformation-aware local descriptor, named DEAL, leverages a polar sampling and a spatial transformer warping to provide invariance to rotation, scale, and image deformations. We train the model architecture end-to-end by applying isometric non-rigid deformations to objects in a simulated environment as guidance to provide highly discriminative local features. The experiments show that our method outperforms state-of-the-art handcrafted, learning-based image, and RGB-D descriptors in different datasets with both real and realistic synthetic deformable objects in still images. The source code and trained model of the descriptor are publicly available at https://www.verlab.dcc.ufmg.br/descriptors/neurips2021.
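The abstract's core idea, polar sampling of the keypoint neighbourhood so that rotating the grid by the keypoint orientation yields rotation invariance and scaling the grid radius yields scale invariance, can be sketched in a few lines. This is a minimal NumPy illustration of the sampling geometry only, not the authors' implementation; the function names, parameters, and the bilinear helper are all illustrative, and the learned spatial-transformer warping that handles non-rigid deformation is omitted.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate a grayscale image (H, W) at float coords (x, y)."""
    h, w = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def polar_sample(img, cx, cy, radius, orientation=0.0, n_rings=8, n_angles=16):
    """Sample a keypoint neighbourhood on a polar grid (illustrative sketch).

    Rotating the grid by `orientation` makes the sampled patch rotation-
    equivariant (a rotation of the image shifts the angular axis), and tying
    `radius` to the keypoint scale gives scale invariance.
    Returns an (n_rings, n_angles) patch.
    """
    rings = np.linspace(radius / n_rings, radius, n_rings)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False) + orientation
    r, t = np.meshgrid(rings, angles, indexing="ij")
    xs = cx + r * np.cos(t)
    ys = cy + r * np.sin(t)
    return bilinear(img, xs, ys)
```

Note the equivariance property this buys: sampling the same point with the grid rotated by one angular step reproduces the original patch with its columns cyclically shifted, which is what lets an orientation estimate cancel out image rotation.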

Guilherme Potje, Renato Martins, Felipe Cadar, Erickson R. Nascimento • 2021

Related benchmarks

Task                               Dataset                                   Metric               Result  Rank
Image Matching                     Simulation                                Matching Score (MS)  36      38
Image Matching                     DeSurT (833 pairs total)                  Matching Score (MS)  33      38
Image Matching                     Kinect 1                                  Matching Score (MS)  0.44    38
Image Matching                     Kinect 2                                  Matching Score (MS)  0.49    38
Image Matching                     HPatches (full)                           MMA (Viewpoint)      33      21
Non-rigid tracking                 Tracking sequences (avg. over sequences)  Inliers (RANSAC)     46      6
Non-rigid 3D Surface Registration  Deformable objects                        2D Acc. @ 2px        29.4    6

Other info

Code: https://www.verlab.dcc.ufmg.br/descriptors/neurips2021