
Deep Kernel Fusion for Transformers

About

Agentic LLM inference with long contexts is increasingly limited by memory bandwidth rather than compute. In this setting, SwiGLU MLP blocks, whose large weights exceed cache capacity, become a major yet under-optimized bottleneck. We propose DeepFusionKernel, a deeply fused kernel that cuts HBM traffic and boosts cache reuse, delivering up to a 13.2% speedup on H100 and 9.7% on A100 over SGLang. Integrated with SGLang and paired with a kernel scheduler, DeepFusionKernel delivers consistent acceleration across generation lengths, while remaining adaptable to diverse models, inference configurations, and hardware platforms.
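The abstract does not describe DeepFusionKernel's internals, but the operation being fused is the standard SwiGLU MLP block: a gate projection, an up projection, an elementwise SiLU-and-multiply, and a down projection. A minimal NumPy sketch of that unfused reference computation (names such as `w_gate`, `w_up`, and `w_down` are illustrative, not from the paper) makes clear what a deeply fused kernel would combine into a single pass, avoiding repeated round trips of intermediate activations to HBM:

```python
import numpy as np

def silu(x):
    # SiLU (swish) activation: x * sigmoid(x)
    return x / (1.0 + np.exp(-x))

def swiglu_mlp(x, w_gate, w_up, w_down):
    # Unfused reference: three separate GEMMs plus an elementwise step.
    # Each stage writes its intermediate to memory and reads it back --
    # the HBM traffic that deep kernel fusion aims to eliminate.
    gate = x @ w_gate                    # (tokens, d_ff)
    up = x @ w_up                        # (tokens, d_ff)
    return (silu(gate) * up) @ w_down    # (tokens, d_model)

# Toy sizes for illustration only; real 70B-class models use far
# larger d_model/d_ff, which is why the weights overflow cache.
rng = np.random.default_rng(0)
d_model, d_ff, tokens = 64, 256, 4
x = rng.standard_normal((tokens, d_model))
w_gate = rng.standard_normal((d_model, d_ff))
w_up = rng.standard_normal((d_model, d_ff))
w_down = rng.standard_normal((d_ff, d_model))

y = swiglu_mlp(x, w_gate, w_up, w_down)
print(y.shape)  # -> (4, 64)
```

In decode-phase inference `tokens` is small while the weight matrices are large, so the block is memory-bound: performance is governed by how few times the weights and intermediates cross the HBM interface, which is the quantity a fused kernel reduces.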

Zixi Zhang, Zhiwen Mo, Yiren Zhao, Robert Mullins • 2026

Related benchmarks

Task            Dataset                             Metric        Result     Rank
LLM Decoding    Llama 3.1 70B                       Throughput    3.12e+3    48
LLM Decoding    Llama 3.1 70B (H100 GPU Cluster)    Throughput    894.3      27
Decoding        Llama 3.1 70B (inference)           Throughput    1.41e+3    21
