
MPCFormer: fast, performant and private Transformer inference with MPC

About

Enabling private inference is crucial for many cloud inference services that are based on Transformer models. However, existing private inference solutions can increase the inference latency by more than 60x or significantly compromise the inference quality. In this paper, we design the framework MPCFormer as a practical solution, using Secure Multi-Party Computation (MPC) and Knowledge Distillation (KD). Through extensive evaluations, we show that MPCFormer significantly speeds up Transformer inference in MPC settings while achieving ML performance similar to that of the input model. On the IMDb dataset, it achieves performance similar to BERT-Base while being 5.3x faster. On the GLUE benchmark, it achieves 97% of BERT-Base's performance with a 2.2x speedup. MPCFormer remains effective with different trained Transformer weights such as RoBERTa-Base and with larger models including BERT-Large. Code is available at https://github.com/MccRee177/MPCFormer.
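As a rough illustration of the approach, the sketch below (PyTorch; not the authors' released code) shows the two ingredients the abstract names: MPC-friendly quadratic stand-ins for GeLU and softmax, followed by a one-step knowledge-distillation update so the approximated student matches the original teacher's outputs. The GeLU polynomial follows the paper's "Quad" approximation; the shifted-square softmax mirrors its "2Quad" idea with an illustrative shift constant c, and the toy two-layer model, module names, and hyperparameters are assumptions for demonstration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def quad_gelu(x: torch.Tensor) -> torch.Tensor:
    # Degree-2 polynomial in place of GeLU's erf/tanh, which are
    # expensive to evaluate under MPC.
    return 0.125 * x ** 2 + 0.25 * x + 0.5


def quad_softmax(x: torch.Tensor, dim: int = -1, c: float = 5.0) -> torch.Tensor:
    # Replace exp with a shifted square so attention normalization needs
    # only multiplications and one division; c is an illustrative shift.
    q = (x + c) ** 2
    return q / q.sum(dim=dim, keepdim=True)


class QuadGELU(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return quad_gelu(x)


# Toy student/teacher pair: the student reuses the teacher's trained
# weights but swaps in the cheap activation, then distills from the
# teacher's logits to recover the lost accuracy.
teacher = nn.Sequential(nn.Linear(16, 16), nn.GELU(), nn.Linear(16, 4))
student = nn.Sequential(nn.Linear(16, 16), QuadGELU(), nn.Linear(16, 4))
student.load_state_dict(teacher.state_dict())  # activations hold no weights

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(64, 16)  # stand-in for downstream task inputs
with torch.no_grad():
    teacher_logits = teacher(x)
loss = F.kl_div(
    F.log_softmax(student(x), dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)
loss.backward()
opt.step()
```

The point of the distillation step is that the quadratic approximations alone degrade accuracy; training the approximated model to imitate the original one recovers most of it, after which the student can be served under MPC at the reported speedups.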

Dacheng Li, Rulin Shao, Hongyi Wang, Han Guo, Eric P. Xing, Hao Zhang • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE (test) | QNLI Accuracy | 90.6 | 12 |
| Private Inference | GPT2-base (124M) | Embed Inference Time (s) | 316.8 | 7 |
| Private Text Generation | GPT2-base (124M) | Usage Fraction | 98.34 | 7 |
| Private Text Generation | T5 (138M) | Memory Fraction | 95.87 | 7 |
| Private Inference | T5 (138M) | Embed Inference Time (s) | 324.8 | 7 |
| Text Generation | MultiWOZ NLG (test) | BERTScore | 0.9287 | 6 |
| Text Generation | CommonGen (test) | BERTScore | 0.8943 | 6 |
| Text Generation | DailyDialog (test) | BERTScore | 0.8161 | 6 |
| Privacy-Preserving Inference | BERT-Base (inference) | GeLU Time (s) | 0.351 | 4 |
| Privacy-Preserving Inference | BERT-Large (inference) | GeLU Time (s) | 0.351 | 4 |