
MobileLLM-Flash: Latency-Guided On-Device LLM Design for Industry-Scale Deployment

About

Real-time AI experiences call for on-device large language models (OD-LLMs) optimized for efficient deployment on resource-constrained hardware. The most useful OD-LLMs produce near-real-time responses and run on a broad range of hardware, maximizing user reach. We present a methodology for designing such models using hardware-in-the-loop architecture search under mobile latency constraints. The system is amenable to industry-scale deployment: it generates models deployable without custom kernels and compatible with standard mobile runtimes such as ExecuTorch. Our methodology avoids specialized attention mechanisms and instead uses attention skipping for long-context acceleration. The approach jointly optimizes the model architecture (layers, dimensions) and the attention pattern. To evaluate candidates efficiently, we treat each as a pruned version of a pretrained backbone with inherited weights, achieving high accuracy with minimal continued pretraining. We leverage the low cost of latency evaluation in a staged process: first learning an accurate latency model, then searching for the Pareto frontier across latency and quality. This yields MobileLLM-Flash, a family of foundation models (350M, 650M, 1.4B) for efficient on-device use with strong capabilities, supporting up to 8k context length. MobileLLM-Flash delivers up to 1.8x faster prefill and 1.6x faster decode on mobile CPUs with comparable or superior quality. Our analysis of Pareto-frontier design choices offers actionable principles for OD-LLM design.
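The staged process described above, a cheap learned latency model followed by a Pareto-frontier search over latency and quality, can be illustrated with a minimal sketch. All coefficients, candidate architectures, and quality scores below are made-up placeholders for illustration, not values from the paper; the real system fits its latency model from hardware-in-the-loop measurements and scores candidates by evaluating pruned variants of the pretrained backbone.

```python
# Stage 1: an inexpensive latency model maps architecture features to
# predicted on-device latency, so the search never has to benchmark
# every candidate on hardware. Coefficients here are illustrative.
def predict_latency_ms(n_layers: int, hidden_dim: int) -> float:
    """Linear latency model over architecture features (hypothetical fit)."""
    return 0.9 * n_layers + 0.004 * hidden_dim + 2.0

# Stage 2: keep only candidates on the latency/quality Pareto frontier.
def pareto_frontier(candidates):
    """A candidate is dominated if another has latency <= and quality >=
    with at least one strict inequality; return the non-dominated set."""
    frontier = []
    for i, (lat, qual, arch) in enumerate(candidates):
        dominated = any(
            l2 <= lat and q2 >= qual and (l2 < lat or q2 > qual)
            for j, (l2, q2, _) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            frontier.append((lat, qual, arch))
    return frontier

# Candidate architectures pruned from a shared backbone, given as
# (n_layers, hidden_dim) with a placeholder quality score each.
candidates = [
    (predict_latency_ms(n_layers, hidden_dim), quality, (n_layers, hidden_dim))
    for n_layers, hidden_dim, quality in [
        (12, 1024, 0.55), (16, 1024, 0.58), (16, 1536, 0.60),
        (24, 1536, 0.61), (24, 2048, 0.63), (30, 2048, 0.62),
    ]
]

front = pareto_frontier(candidates)
```

In this toy run the (30, 2048) candidate is dropped: (24, 2048) predicts both lower latency and higher quality, so it dominates. The real search explores a much larger joint space that also includes the attention-skipping pattern.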

Hanxian Huang, Igor Fedorov, Andrey Gromov, Bernard Beckerman, Naveen Suda, David Eriksson, Maximilian Balandat, Rylan Conway, Patrick Huber, Chinnadhurai Sankar, Ayushi Dalmia, Zechun Liu, Lemeng Wu, Tarek Elgamal, Adithya Sagar, Vikas Chandra, Raghuraman Krishnamoorthi • 2026

Related benchmarks

Task                             Dataset                  Metric                       Result   Rank
Knowledge                        MMLU                     Accuracy                     47.89    136
Social Commonsense Reasoning     SocialIQA                --                           --       100
Open-domain Question Answering   Natural Questions (NQ)   Exact Match (EM)             11.83    74
Coding                           HumanEval                HumanEval Mean Score         0.4634   32
Open-domain Question Answering   TriviaQA (TQA)           Accuracy                     0.3606   28
Science Question Answering       ARC Easy                 Accuracy (Character-level)   72.26    20
Coding                           MBPP                     Solve Rate                   35.6     15
Common Sense Reasoning           HellaSwag                Character Accuracy           66.87    11
Physical Commonsense Reasoning   PIQA                     Character-level Accuracy     75.52    11
Reasoning                        WinoGrande               Character-level Accuracy     64.01    11

Showing 10 of 15 rows.
