| Dataset | SOTA Method | Metric | Value | Results | Last Updated |
|---|---|---|---|---|---|
| Long-Range Arena (LRA) | Performer | Steps per second | 99.6 | 84 | 1mo ago |
| HCP-WM | Mamba | Inference Time (ms) | 0.33 | 16 | 1mo ago |
| NVIDIA RTX 3090, 256 x 256 inputs | MOCE-IR | Params (M) | 11.48 | 7 | 13d ago |
| Visual In-Context Learning Evaluation Set | MAE-VQGAN | Inference Time (ms) | 51.26 | 7 | 29d ago |
| Context Length 32K | | Theoretical Compute (TFLOPs) | 928 | 5 | 1mo ago |
| Context Length 16K | | Theoretical Compute (TFLOPs) | 336 | 5 | 1mo ago |
| Context Length 4K | | Theoretical Compute (TFLOPs) | 60 | 5 | 1mo ago |
| Standard Transformer Pipeline | Vanilla | TFLOPs | 5.973 | 5 | 1mo ago |
| ImageNet-100 (train) | Collab | Storage Usage (GB) | 0.13 | 5 | 1mo ago |
| UTKFace | Collab | Storage Usage (MB) | 82.8 | 5 | 1mo ago |
| IoT network metadata 7 worlds | Autoencoder | Mean Latency (µs) | 2.03 | 4 | 1mo ago |
| Efficiency Evaluation NVIDIA Tesla T4 GPU | GLAM - CV | Inference Time (ms, GPU) | 4 | 4 | 1mo ago |
| Alignment (train) | RLOO | Training Time | 5 | 4 | 1mo ago |