| Task Name | Dataset Name | Metric | SOTA Result | Trend |
|---|---|---|---|---|
| Multi-bit LLM Watermarking | LLaMA3-8B-Base, max 256 tokens | AUC | 1 | 20 |
| Multi-bit LLM Watermarking | LLaMA3-8B-Base, max 128 tokens | AUC | 1 | 20 |
| Jailbreak Attack | Llama3-8B | Average ASR | 0 | 16 |
| Jailbreak Attack | Llama3-8B pretrained v1 | ASR | 0 | 13 |
| Defending Against Gradient-Based Attacks | Llama3 AutoDAN Attack (test) | ASR | 10.57 | 10 |
| Training Throughput | Llama3 8B (train) | Throughput (128K SeqLen) | 2,320.47 | 5 |
| Training Memory Usage Profiling | Llama3-8B, 8×H100s | Peak Memory Usage (128K) | 21.1 | 5 |
| Quantization | LLaMA3-8B | Averaged Quantization Time (s) | 27 | 4 |
| Model Compression | Llama3-1B | Energy Consumed (kWh) | 0.0765 | 2 |