
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine

About

In this article, we introduce a novel variant of the Tsetlin machine (TM) that randomly drops clauses, the key learning elements of a TM. In effect, a TM with drop clause ignores a random selection of the clauses in each epoch, selected according to a predefined probability. In this way, additional stochasticity is introduced into the learning phase of the TM. To explore the effects drop clause has on accuracy, training time, interpretability and robustness, we conduct extensive experiments on nine benchmark datasets in natural language processing (NLP) (IMDb, R8, R52, MR and TREC) and image classification (MNIST, Fashion MNIST, CIFAR-10 and CIFAR-100). Our proposed model outperforms baseline machine learning algorithms by a wide margin and achieves competitive performance in comparison with recent deep learning models such as BERT and AlexNet-DFA. In brief, we observe up to a +10% increase in accuracy and 2x to 4x faster learning compared with the standard TM. We further employ the Convolutional TM to document interpretable results on the CIFAR datasets, visualizing how the heatmaps produced by the TM become more interpretable with drop clause. We also evaluate how drop clause affects learning robustness by introducing corruptions and alterations in the image/language test data. Our results show that drop clause makes the TM more robust towards such changes.
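The core mechanism (dropping each clause independently per epoch with a fixed probability, and excluding dropped clauses from the class vote) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; `drop_clause_mask` and `masked_vote` are hypothetical helper names, and the clause outputs and polarities below are toy values.

```python
import numpy as np

def drop_clause_mask(n_clauses, p_drop, rng):
    # Sample a per-epoch keep mask: each clause is dropped
    # independently with probability p_drop.
    return rng.random(n_clauses) >= p_drop

def masked_vote(clause_outputs, polarities, mask):
    # Class vote: sum of +1/-1 polarity-weighted clause outputs,
    # counting only the clauses kept this epoch.
    return int(np.sum(clause_outputs * polarities * mask))

# Toy example: 8 clauses, half voting for the class (+1), half against (-1).
rng = np.random.default_rng(0)
outputs = np.array([1, 0, 1, 1, 0, 1, 1, 0])       # clause firing pattern (0/1)
polarities = np.array([+1, +1, +1, +1, -1, -1, -1, -1])

mask = drop_clause_mask(len(outputs), p_drop=0.25, rng=rng)
vote = masked_vote(outputs, polarities, mask)
```

During training, feedback would likewise be given only to clauses with `mask` set, so the dropped clauses neither vote nor update that epoch.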

Jivitesh Sharma, Rohan Yadav, Ole-Christoffer Granmo, Lei Jiao • 2021

Related benchmarks

Task                 | Dataset              | Metric   | Result | Rank
---------------------|----------------------|----------|--------|-----
Image Classification | CIFAR-100 (test)     | Accuracy | 45.2   | 3518
Image Classification | CIFAR-10 (test)      | Accuracy | 75.1   | 3381
Image Classification | MNIST (test)         | Accuracy | 99.45  | 882
Image Classification | Fashion MNIST (test) | Accuracy | 92.5   | 568
Text Classification  | TREC                 | Accuracy | 90.5   | 179
Text Classification  | IMDB                 | Accuracy | 91.27  | 107
Text Classification  | MR (test)            | Accuracy | 78.67  | 99
Text Classification  | MR                   | Accuracy | 78.67  | 93
Text Classification  | IMDB (test)          | Accuracy | 91.27  | 79
Image Classification | F-MNIST (test)       | Accuracy | 92.5   | 64

Showing 10 of 15 rows

Other info

Code
