
Where to Pay Attention in Sparse Training for Feature Selection?

About

A new line of research on feature selection based on neural networks has recently emerged. Despite its superiority to classical methods, it requires many training iterations to converge and detect informative features. The computational time becomes prohibitively long for datasets with a large number of samples or a very high-dimensional feature space. In this paper, we present a new efficient unsupervised method for feature selection based on sparse autoencoders. In particular, we propose a new sparse training algorithm that optimizes a model's sparse topology during training to pay attention to informative features quickly. The attention-based adaptation of the sparse topology enables fast detection of informative features after a few training iterations. We performed extensive experiments on 10 datasets of different types, including image, speech, text, artificial, and biological data. They cover a wide range of characteristics, such as low- and high-dimensional feature spaces and small and large numbers of training samples. Our proposed approach outperforms the state-of-the-art methods in terms of selecting informative features while reducing training iterations and computational costs substantially. Moreover, the experiments show the robustness of our method in extremely noisy environments.
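The core idea above, adapting a sparse topology during training so that connections concentrate on informative input features, can be sketched as a prune-and-grow cycle. The sketch below is an illustrative reconstruction, not the authors' implementation: the importance score (`attention`), the pruning fraction, and zero re-initialization of regrown weights are assumptions for the example.

```python
import numpy as np

def prune_and_grow(W, mask, attention, prune_frac=0.3, rng=None):
    """One sparse-topology update: remove the weakest fraction of active
    weights, then regrow the same number of connections, sampling input
    features (rows of W) in proportion to their attention score.
    Hypothetical sketch of attention-guided dynamic sparse training."""
    if rng is None:
        rng = np.random.default_rng(0)
    flat_mask = mask.ravel()
    active = np.flatnonzero(flat_mask)
    n_drop = max(1, int(prune_frac * active.size))

    # Prune: deactivate the connections with the smallest weight magnitude.
    drop = active[np.argsort(np.abs(W.ravel()[active]))[:n_drop]]
    flat_mask[drop] = False
    W.ravel()[drop] = 0.0

    # Grow: bias new connections toward high-attention input features,
    # so informative features accumulate connections quickly.
    p = attention / attention.sum()
    grown = 0
    while grown < n_drop:
        i = rng.choice(attention.size, p=p)   # input feature (row)
        j = rng.integers(W.shape[1])          # hidden unit (column)
        if not mask[i, j]:
            mask[i, j] = True
            W[i, j] = 0.0  # new weights start at zero, trained afterwards
            grown += 1
    return W, mask

def select_features(W, mask, k):
    """Rank input features by total absolute connection strength."""
    strength = np.abs(W * mask).sum(axis=1)
    return np.argsort(strength)[::-1][:k]
```

Run between training steps, this keeps the number of active connections fixed while redirecting them toward informative inputs; after a few iterations, `select_features` reads the selected features directly off the learned topology.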

Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, Decebal Constantin Mocanu • 2022

Related benchmarks

Task            Dataset      Result        Rank
Classification  COIL-20      Accuracy 1    76
Clustering      COIL-20      ACC 65        47
Classification  GLIOMA       Accuracy 72   46
Classification  Lung         ACC 87        46
Clustering      Yale         Accuracy 56   37
Classification  Prostate     Accuracy 81   32
Classification  Yale         Accuracy 70   28
Classification  Madelon      Accuracy 87   26
Classification  PCMAC        Accuracy 83   26
Classification  warpPIE10P   Accuracy 95   26
Showing 10 of 36 rows
