
Building Deep Networks on Grassmann Manifolds

About

Learning representations on Grassmann manifolds is popular in quite a few visual recognition tasks. In order to enable deep learning on Grassmann manifolds, this paper proposes a deep network architecture by generalizing the Euclidean network paradigm to Grassmann manifolds. In particular, we design full rank mapping layers to transform input Grassmannian data to more desirable ones, exploit re-orthonormalization layers to normalize the resulting matrices, study projection pooling layers to reduce the model complexity in the Grassmannian context, and devise projection mapping layers to respect Grassmannian geometry and meanwhile achieve Euclidean forms for regular output layers. To train the Grassmann networks, we exploit a stochastic gradient descent setting on manifolds of the connection weights, and study a matrix generalization of backpropagation to update the structured data. The evaluations on three visual recognition tasks show that our Grassmann networks have clear advantages over existing Grassmann learning methods, and achieve results comparable with state-of-the-art approaches.
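The core layer operations described above can be sketched numerically. The following is a minimal NumPy illustration of a single forward pass through the three mapping layers, not the authors' implementation; the function names (`frmap`, `reorth`, `projmap`) are illustrative, and the layer semantics follow the abstract: a full-rank linear map, a QR-based re-orthonormalization, and a projection embedding that yields a Euclidean (symmetric matrix) form.

```python
import numpy as np

def frmap(X, W):
    # Full rank mapping layer: transform an input orthonormal basis X (d x q)
    # with a full-rank connection weight W (d' x d). The result W @ X is a
    # d' x q matrix that is generally no longer orthonormal.
    return W @ X

def reorth(Y):
    # Re-orthonormalization layer: a reduced QR decomposition recovers an
    # orthonormal basis Q spanning the same subspace as the columns of Y.
    Q, _ = np.linalg.qr(Y)
    return Q

def projmap(Q):
    # Projection mapping layer: embed the subspace span(Q) as the symmetric
    # idempotent matrix Q @ Q.T, a Euclidean form suitable for regular
    # output layers.
    return Q @ Q.T

# Forward pass through one block, with illustrative sizes.
rng = np.random.default_rng(0)
X = np.linalg.qr(rng.standard_normal((10, 3)))[0]  # a point on Gr(3, 10)
W = rng.standard_normal((8, 10))                   # full-rank weight (generic)
P = projmap(reorth(frmap(X, W)))                   # 8 x 8 symmetric output
```

Note that `reorth` keeps the output on a Grassmann manifold, which is why training requires the manifold-constrained stochastic gradient descent and matrix backpropagation mentioned in the abstract rather than plain Euclidean updates.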

Zhiwu Huang, Jiqing Wu, Luc Van Gool • 2016

Related benchmarks

Task                               Dataset         Accuracy  Rank
EEG signal classification          MAMEM-SSVEP-II  61.23     29
Video-based 3D action recognition  FPHA            78.79     8
3D action recognition              HDM05           59.23     7
