
Learning to Maximize Mutual Information for Dynamic Feature Selection

About

Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning, but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality, and it outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
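The greedy policy described above can be sketched in a few lines. Note that `score_fn` and `predict_fn` below are hypothetical stand-ins for the networks the paper learns via amortized optimization (one estimating a per-feature conditional mutual information score given the currently observed subset, one predicting from partial inputs); here they are replaced by toy functions so the selection loop itself is runnable.

```python
import numpy as np

def greedy_dfs(x, score_fn, predict_fn, budget):
    """Sequentially query features: at each step, pick the unobserved
    feature with the highest estimated conditional mutual information
    given the features observed so far, then predict from the subset."""
    d = len(x)
    mask = np.zeros(d, dtype=bool)          # which features are observed
    for _ in range(budget):
        scores = score_fn(x * mask, mask)   # one CMI estimate per feature
        scores[mask] = -np.inf              # never re-select a feature
        mask[np.argmax(scores)] = True      # query the best feature
    return predict_fn(x * mask, mask), mask

# Toy stand-ins (the paper instead trains neural networks for both roles):
rng = np.random.default_rng(0)
W = rng.normal(size=10)
score_fn = lambda x_obs, m: np.abs(W)       # hypothetical fixed scores
predict_fn = lambda x_obs, m: float(x_obs @ W > 0)

x = rng.normal(size=10)
pred, mask = greedy_dfs(x, score_fn, predict_fn, budget=3)
print(int(mask.sum()))  # number of features queried
```

In the paper's setting, both networks are trained jointly so that, at optimality, the scoring network recovers the exact greedy conditional-mutual-information policy without oracle access to the data distribution.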

Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, Su-In Lee • 2023

Related benchmarks

Task                                           Dataset           Metric    Result  Rank
Image Classification                           MNIST (test)      Accuracy  77.94   61
Classification                                 WINE (test)       Accuracy  79.01   29
Tabular Classification                         Diabetes (test)   Accuracy  55.87   14
Tabular Classification with Feature Selection  Yeast             Accuracy  0.3465  14
Tabular Classification with Feature Selection  Cirrhosis         Accuracy  63.22   14
Tabular Classification with Feature Selection  Diabetes          Accuracy  56.55   14
Tabular Classification with Feature Selection  Heart             Accuracy  66.39   14
Tabular Classification                         Heart (test)      Accuracy  72.5    14
Tabular Classification with Feature Selection  Wine              Accuracy  63.06   14
Tabular Classification                         Cirrhosis (test)  Accuracy  65.87   14

(Showing 10 of 12 rows)
