Large Language Model as Token Compressor and Decompressor
About
In this paper, we establish the novel insight that an off-the-shelf LLM can function as an excellent token compressor and decompressor. To demonstrate this, we design a self-expressive autoencoding framework that fine-tunes a pretrained LLM, via lightweight LoRA-based adapter heads, to translate long texts into a compact internal language of discrete, variable-length latent codes, termed Z-tokens, and to reconstruct the original text exactly from them. The resulting representation is content-adaptive: semantically dense segments receive more Z-tokens, while redundant or predictable regions are aggressively compressed. Empirically, our method achieves up to 18× token reduction on Wikipedia, CNN/DailyMail, HotpotQA, and Qulac-style long-query datasets, while preserving reconstruction fidelity and downstream performance. This simple yet effective design supports applications including prompt compression and autoregressive generation directly in the Z-token space, offering a potential pathway toward token-efficient long-context reasoning.
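The interface described above can be sketched with a toy stand-in. The sketch below is illustrative only: the paper's compressor is a LoRA-fine-tuned LLM, whereas here simple run-length encoding stands in for the learned mapping, purely to show the shape of a content-adaptive, variable-length, losslessly invertible compressor. The function names `compress` and `decompress` are assumptions for this sketch, not the paper's API.

```python
# Toy stand-in for the learned Z-token compressor: run-length encoding.
# Like the learned Z-tokens, the code length adapts to content --
# redundant/predictable runs collapse to few codes, varied spans do not.

def compress(tokens):
    """Map a token sequence to a shorter sequence of (token, count) codes."""
    z = []
    for t in tokens:
        if z and z[-1][0] == t:
            z[-1] = (t, z[-1][1] + 1)  # extend the current run
        else:
            z.append((t, 1))           # start a new run
    return z

def decompress(z):
    """Exactly reconstruct the original token sequence from the codes."""
    return [t for t, n in z for _ in range(n)]

tokens = ["the"] * 6 + ["cat", "sat"] + ["on"] * 3
z = compress(tokens)
assert decompress(z) == tokens   # lossless reconstruction
ratio = len(tokens) / len(z)     # 11 tokens -> 4 codes, ratio 2.75
```

The learned version replaces run-length heuristics with Z-tokens drawn from the LLM's own representation space, which is what allows semantically dense (not merely literally repetitive) text to be compressed.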
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | NarrativeQA (test) | -- | -- | 68 |
| Question Answering | QASPER (test) | F1 Score (Match) | 18.31 | 27 |
| Text Summarization | CNN/DailyMail | ROUGE-1 | 32.58 | 13 |
| Reconstruction | Wikipedia | BLEU-4 | 99.31 | 8 |
| Question Answering | QuALITY (test) | F1 Score | 39.25 | 6 |
| Question Answering | HotpotQA (test) | F1 Score | 33.35 | 6 |