LLM-Guided Semantic Bootstrapping for Interpretable Text Classification with Tsetlin Machines

About

Pretrained language models (PLMs) like BERT provide strong semantic representations but are costly and opaque, while symbolic models such as the Tsetlin Machine (TM) offer transparency but lack semantic generalization. We propose a semantic bootstrapping framework that transfers LLM knowledge into symbolic form, combining interpretability with semantic capacity. Given a class label, an LLM generates sub-intents that guide synthetic data creation through a three-stage curriculum (seed, core, enriched), expanding semantic diversity. A Non-Negated TM (NTM) learns from these examples to extract high-confidence literals as interpretable semantic cues. Injecting these cues into real data enables a TM to align clause logic with LLM-inferred semantics. Our method requires no embeddings or runtime LLM calls, yet equips symbolic models with pretrained semantic priors. Across multiple text classification tasks, it improves interpretability and accuracy over vanilla TM, achieving performance comparable to BERT while remaining fully symbolic and efficient.
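To make the pipeline concrete, below is a minimal Python sketch of the bootstrapping loop under stated assumptions: the sub-intents and curriculum texts are hard-coded stand-ins for LLM output, and "high-confidence literals" are approximated with a simple document-frequency filter rather than weights read from a trained Non-Negated TM's clauses. The names `SUB_INTENTS`, `CURRICULUM`, `extract_literals`, and `inject_cues`, along with the `min_df` threshold and stopword list, are illustrative, not the paper's implementation or API.

```python
from collections import Counter

# Hypothetical sub-intents for one class ("sports"); in the paper these
# would be generated by an LLM from the class label alone.
SUB_INTENTS = ["match results", "standout goals", "tournament news"]

# Three-stage synthetic curriculum (seed -> core -> enriched). Each stage
# would normally be produced by prompting the LLM with the sub-intents;
# tiny hard-coded examples are used here so the sketch runs standalone.
CURRICULUM = {
    "seed": [
        "the team won the match",
        "a late goal decided the match",
    ],
    "core": [
        "the striker scored a stunning goal in the derby match",
        "the tournament schedule resumes after the break",
    ],
    "enriched": [
        "an injury-time goal sealed a dramatic tournament comeback",
        "fans celebrated as the team lifted the tournament trophy",
    ],
}

STOPWORDS = {"the", "a", "an", "in", "as", "after"}

def tokenize(text):
    return text.lower().split()

def extract_literals(curriculum, min_df=3):
    """Stand-in for reading high-confidence literals out of a trained
    Non-Negated TM: keep content tokens that recur across the synthetic
    corpus (document frequency >= min_df, an assumed threshold)."""
    docs = [set(tokenize(t)) for stage in curriculum.values() for t in stage]
    df = Counter(tok for doc in docs for tok in doc)
    return {tok for tok, n in df.items()
            if n >= min_df and tok not in STOPWORDS}

def inject_cues(real_text, literals):
    """Append the class's cue literals as extra tokens so the downstream
    TM can form clauses over both raw words and LLM-derived semantics."""
    return real_text + " " + " ".join(sorted(literals))

if __name__ == "__main__":
    cues = extract_literals(CURRICULUM)  # {'goal', 'match', 'tournament'}
    print(inject_cues("city beat united with a stoppage-time winner", cues))
```

In the full method, the frequency filter would be replaced by literal confidences extracted from the NTM's learned clauses, and the cue-augmented real data is what the final, fully symbolic TM trains on; no embeddings or LLM calls are needed at inference time.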

Jiechao Gao, Rohan Kumar Yadav, Yuangang Li, Yuandong Pan, Jie Wang, Ying Liu, Michael Lepech • 2026

Related benchmarks

Task                              Dataset   Metric     Result   Rank
Topic Classification              AG-News   Accuracy   93.1     225
Text Classification               R8        Accuracy   97.88    71
Sentiment Analysis                IMDB      Accuracy   92.1     67
Text Classification               R52       Accuracy   94.45    56
Sentiment Analysis                SST2      Accuracy   85.24    39
Biomedical Text Classification    HOC       micro-F1   81.9     8
