
Revisiting Data Compression with Language Modeling

About

In this report, we investigate the use of large language models (LLMs) for data compression. Previous work has demonstrated promising results in applying LLMs to compress not only text but also a wide range of multi-modal data. Despite this favorable performance, several practical questions still stand in the way of replacing existing data compression algorithms with LLMs. In this work, we explore different methods for achieving a lower adjusted compression rate with LLMs as data compressors. Compared to previous work, we achieve a new state-of-the-art (SOTA) adjusted compression rate of around $18\%$ on the enwik9 dataset without additional model training. Furthermore, we explore the use of LLMs in compressing non-English data, code, and byte-stream sequences. We show that while LLMs excel at compressing data in text-dominant domains, they remain competitive on non-natural-text sequences when configured appropriately.
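For context, here is a brief sketch of the two metrics quoted on this page, assuming the definitions standard in the LLM-as-compressor literature (an assumption; the report itself may adopt slightly different conventions). The raw compression rate counts only the size of the encoded output, while the adjusted rate additionally charges for the model the decoder must hold:

$$\gamma_r = \frac{|C(x)|}{|x|}, \qquad \gamma_a = \frac{|C(x)| + |M|}{|x|}$$

where $|x|$ is the size of the input, $|C(x)|$ the size of the compressed output, and $|M|$ the size of the model description. Under these assumed definitions, a raw rate of $\gamma_r = 0.085$ on the 1 GB enwik9 input corresponds to roughly 85 MB of compressed output, while the adjusted rate of around $18\%$ also reflects the cost of storing the model needed for decompression.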

Chen-Han Tsai • 2026

Related benchmarks

Task | Dataset | Result | Rank
Data Compression | enwik9 1GB (test) | Raw Compression Rate (γr) = 0.085 | 37
