
TT-Sparse: Learning Sparse Rule Models with Differentiable Truth Tables

About

Interpretable machine learning is essential in high-stakes domains where decision-making requires accountability, transparency, and trust. While rule-based models offer global and exact interpretability, learning rule sets that simultaneously achieve high predictive performance and low, human-understandable complexity remains challenging. To address this, we introduce TT-Sparse, a flexible neural building block that leverages differentiable truth tables as nodes to learn sparse, effective connections. A key contribution of our approach is a new soft TopK operator with straight-through estimation for learning discrete, cardinality-constrained feature selection in an end-to-end differentiable manner. Crucially, the forward pass remains sparse, enabling efficient computation and exact symbolic rule extraction. As a result, each node (and the entire model) can be transformed exactly into compact, globally interpretable DNF/CNF Boolean formulas via Quine-McCluskey minimization. Extensive empirical results across 28 datasets spanning binary, multiclass, and regression tasks show that the learned sparse rules exhibit superior predictive performance with lower complexity compared to existing state-of-the-art methods.
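The soft TopK operator with straight-through estimation described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the softmax surrogate, the temperature parameter `tau`, and the function name are all assumptions made for the example.

```python
import numpy as np

def soft_topk_ste_forward(scores, k, tau=1.0):
    """Hypothetical sketch of a straight-through top-k feature selector.

    Forward pass: a hard binary mask over the k largest scores, so the
    downstream computation stays sparse and exact rule extraction is possible.
    Backward pass (not shown): gradients would be routed through the soft
    surrogate `soft` instead of the non-differentiable hard mask.
    """
    # soft relaxation of the selection (assumed surrogate: tempered softmax)
    soft = np.exp(scores / tau)
    soft /= soft.sum()
    # exact, cardinality-constrained selection: indices of the k largest scores
    idx = np.argsort(scores)[-k:]
    hard = np.zeros_like(scores)
    hard[idx] = 1.0
    return hard, soft

# Example: select the 2 highest-scoring of 4 candidate features.
mask, soft = soft_topk_ste_forward(np.array([0.2, 1.5, -0.3, 0.9]), k=2)
# mask is binary with exactly k ones (features 1 and 3 here)
```

In a differentiable framework, the straight-through trick would return something like `(hard - soft).detach() + soft`, so the forward value is the hard mask while gradients follow the soft surrogate.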

Hans Farrell Soegeng, Sarthak Ketanbhai Modi, Thomas Peyrin • 2026

Related benchmarks

Task                        Dataset       Result            Rank
Regression                  California    R² Score 71.02    40
Binary classification       Diabetes      AUC 0.8208        34
Multi-class classification  Yeast         --                20
Binary classification       Heart         Mean AUC 93.08    17
Binary classification       Electricity   AUC 88.23         12
Binary classification       blood         AUC 75.54         10
Binary classification       cc default    AUC 0.7915        10
Binary classification       Bank          AUC 90.96         10
Binary classification       calhousing    AUC 93.07         10
Binary classification       COMPAS        AUC 73.01         10

Showing 10 of 26 rows
