
CLAD: Efficient Log Anomaly Detection Directly on Compressed Representations

About

The explosive growth of system logs makes streaming compression essential, yet existing log anomaly detection (LAD) methods incur severe pre-processing overhead by requiring full decompression and parsing. We introduce CLAD, the first deep learning framework to perform LAD directly on compressed byte streams. CLAD bypasses these bottlenecks by exploiting a key insight: normal logs compress into regular byte patterns, while anomalies systematically disrupt them. To extract these multi-scale deviations from opaque bytes, we propose a purpose-built architecture integrating a dilated convolutional byte encoder, a hybrid Transformer–mLSTM block, and four-way aggregation pooling. This is coupled with a two-stage training strategy of masked pre-training and focal-contrastive fine-tuning to effectively handle severe class imbalance. Evaluated across five datasets, CLAD achieves a state-of-the-art average F1-score of 0.9909 and outperforms the best baseline by 2.72 percentage points. It delivers superior accuracy while completely eliminating decompression and parsing overheads, offering a robust solution that generalizes to structured streaming compressors.
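The page does not give CLAD's layer definitions, so the following is only a minimal numpy sketch of the front of the pipeline described above: embedding raw compressed bytes, passing them through stacked dilated 1-D convolutions to capture multi-scale byte patterns, and collapsing the sequence with a four-way aggregation pooling. The choice of mean/max/min/last as the four statistics, and all shapes and weights, are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    # Causal 1-D dilated convolution over a byte-embedding sequence.
    # x: (T, C_in), w: (K, C_in, C_out); left-pad so output keeps length T.
    K, C_in, C_out = w.shape
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros((pad, C_in)), x], axis=0)
    out = np.zeros((x.shape[0], C_out))
    for t in range(x.shape[0]):
        for k in range(K):
            out[t] += xp[t + k * dilation] @ w[k]
    return np.maximum(out, 0.0)  # ReLU

def four_way_pool(h):
    # Four-way aggregation (assumed here to be mean/max/min/last)
    # producing one fixed-size vector per log window.
    return np.concatenate([h.mean(0), h.max(0), h.min(0), h[-1]])

rng = np.random.default_rng(0)
byte_stream = rng.integers(0, 256, size=64)       # stand-in for compressed bytes
emb = rng.standard_normal((256, 8)) * 0.1         # byte-embedding table
x = emb[byte_stream]                              # (64, 8)
h = dilated_conv1d(x, rng.standard_normal((3, 8, 16)) * 0.1, dilation=2)
h = dilated_conv1d(h, rng.standard_normal((3, 16, 16)) * 0.1, dilation=4)
feat = four_way_pool(h)                           # (64,) window feature
print(feat.shape)
```

In the full framework this window feature would feed the hybrid Transformer–mLSTM stage and the focal-contrastive fine-tuning objective; those parts are omitted here.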

Benzhao Tang, Shiyu Yang • 2026

Related benchmarks

Task                    Dataset   Metric      Result   Rank
Log Anomaly Detection   HDFS      F1 Score    99.4     17
Log Anomaly Detection   SPIRIT    F1 Score    99.98    17
Log Anomaly Detection   BGL       F1 Score    96.45    17
Log Anomaly Detection   Liberty   Precision   99.7     4
