
Highly Efficient and Effective LLMs with Multi-Boolean Architectures

About

Weight binarization has emerged as a promising strategy to reduce the complexity of large language models (LLMs). Existing approaches fall into post-training binarization, which is simple but causes severe performance loss, and training-aware methods, which depend on full-precision latent weights, adding complexity and limiting efficiency. We propose a novel framework that represents LLMs with multi-kernel Boolean parameters and, for the first time, enables direct finetuning of LLMs in the Boolean domain, eliminating the need for latent weights. This enhances representational capacity and dramatically reduces complexity during both finetuning and inference. Extensive experiments across diverse LLMs show our method outperforms recent ultra-low-bit quantization and binarization techniques.
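The core idea of representing weights with multiple Boolean kernels can be illustrated with a simple greedy residual binarization: a full-precision matrix is approximated as a sum of scaled {-1, +1} kernels, each fitted to the residual left by the previous ones. This is a minimal sketch of the general multi-kernel idea, not the authors' actual algorithm; the function name and the greedy fitting scheme are illustrative assumptions.

```python
import numpy as np

def multi_boolean_approx(W, num_kernels=2):
    """Approximate W as sum_k alpha_k * B_k with Boolean kernels
    B_k in {-1, +1}. Illustrative sketch only, not the paper's method."""
    residual = W.astype(np.float64).copy()
    kernels, scales = [], []
    for _ in range(num_kernels):
        B = np.where(residual >= 0, 1.0, -1.0)   # sign kernel of the residual
        alpha = np.abs(residual).mean()          # scale minimizing ||residual - alpha*B||_F
        kernels.append(B)
        scales.append(alpha)
        residual -= alpha * B                    # fit next kernel to what remains
    return kernels, scales

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
kernels, scales = multi_boolean_approx(W, num_kernels=2)
W_hat = sum(a * B for a, B in zip(scales, kernels))
err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"relative error with 2 Boolean kernels: {err:.3f}")
```

Each extra kernel tightens the approximation, which is why a multi-Boolean representation has more capacity than plain one-kernel binarization while every kernel remains a 1-bit matrix.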

Ba-Hien Tran, Van Minh Nguyen · 2025

Related benchmarks

Task               | Dataset              | Result                 | Rank
Language Modeling  | C4                   | Perplexity 6.94        | 1182
Language Modeling  | WikiText-2           | Perplexity (PPL) 5.14  | 841
Question Answering | QA Zero-shot Average | 62.73                  | 57
