MobileLLM-Flash: Latency-Guided On-Device LLM Design for Industry-Scale Deployment
About
Real-time AI experiences call for on-device large language models (OD-LLMs) optimized for efficient deployment on resource-constrained hardware. The most useful OD-LLMs produce near-real-time responses and exhibit broad hardware compatibility, maximizing user reach. We present a methodology for designing such models using hardware-in-the-loop architecture search under mobile latency constraints. The system is amenable to industry-scale deployment: it generates models deployable without custom kernels and compatible with standard mobile runtimes such as ExecuTorch. Our methodology avoids specialized attention mechanisms and instead uses attention skipping for long-context acceleration. The approach jointly optimizes model architecture (layers, dimensions) and the attention pattern. To evaluate candidates efficiently, we treat each as a pruned version of a pretrained backbone with inherited weights, achieving high accuracy with minimal continued pretraining. We exploit the low cost of latency evaluation in a staged process: first learning an accurate latency model, then searching for the Pareto frontier across latency and quality. This yields MobileLLM-Flash, a family of foundation models (350M, 650M, 1.4B) for efficient on-device use with strong capabilities, supporting up to 8k context length. MobileLLM-Flash delivers up to 1.8x faster prefill and 1.6x faster decode on mobile CPUs with comparable or superior quality. Our analysis of Pareto-frontier design choices offers actionable principles for OD-LLM design.
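The staged search described above ranks candidate architectures by two objectives, predicted latency and quality, and keeps only non-dominated points. As a minimal sketch of that final filtering step (all names such as `Candidate`, `latency_ms`, and `quality` are illustrative assumptions, not the authors' actual code):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical candidate architecture from the search space.
    name: str
    latency_ms: float  # e.g. predicted by a learned latency model
    quality: float     # e.g. proxy accuracy after brief continued pretraining

def pareto_frontier(cands):
    """Keep candidates not dominated in (lower latency, higher quality)."""
    frontier = []
    for c in cands:
        dominated = any(
            o.latency_ms <= c.latency_ms and o.quality >= c.quality
            and (o.latency_ms < c.latency_ms or o.quality > c.quality)
            for o in cands
        )
        if not dominated:
            frontier.append(c)
    return sorted(frontier, key=lambda c: c.latency_ms)

cands = [
    Candidate("A", 120.0, 0.62),
    Candidate("B", 150.0, 0.60),  # dominated by A: slower and worse
    Candidate("C", 200.0, 0.70),
    Candidate("D", 90.0, 0.55),
]
print([c.name for c in pareto_frontier(cands)])  # → ['D', 'A', 'C']
```

In the actual methodology, quality evaluation is the expensive step, which is why candidates inherit backbone weights and latency is modeled separately before the frontier search.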
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Knowledge | MMLU | Accuracy | 47.89 | 136 |
| Social Commonsense Reasoning | SocialIQA | -- | -- | 100 |
| Open-domain Question Answering | Natural Questions (NQ) | Exact Match (EM) | 11.83 | 74 |
| Coding | HumanEval | Mean Score | 0.4634 | 32 |
| Open-domain Question Answering | TriviaQA (TQA) | Accuracy | 0.3606 | 28 |
| Science Question Answering | ARC Easy | Character-level Accuracy | 72.26 | 20 |
| Coding | MBPP | Solve Rate | 35.6 | 15 |
| Common Sense Reasoning | HellaSwag | Character-level Accuracy | 66.87 | 11 |
| Physical Commonsense Reasoning | PIQA | Character-level Accuracy | 75.52 | 11 |
| Reasoning | WinoGrande | Character-level Accuracy | 64.01 | 11 |