CHEHAB RL: Learning to Optimize Fully Homomorphic Encryption Computations
About
Fully Homomorphic Encryption (FHE) enables computations directly on encrypted data, but its high computational cost remains a significant barrier. Writing efficient FHE code is a complex task requiring cryptographic expertise, and finding the optimal sequence of program transformations is often intractable. In this paper, we propose CHEHAB RL, a novel framework that leverages deep reinforcement learning (RL) to automate FHE code optimization. Instead of relying on predefined heuristics or combinatorial search, our method trains an RL agent to learn an effective policy for applying a sequence of rewriting rules that automatically vectorizes scalar FHE code while reducing instruction latency and noise growth. The proposed approach supports the optimization of both structured and unstructured code. To train the agent, we synthesize a diverse dataset of computations using a large language model (LLM). We integrate our proposed approach into the CHEHAB FHE compiler and evaluate it on a suite of benchmarks, comparing its performance against Coyote, a state-of-the-art vectorizing FHE compiler. The results show that code generated by our approach executes $5.3\times$ faster and accumulates $2.54\times$ less noise, while compilation itself is $27.9\times$ faster than with Coyote (geometric means).
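The core loop described above — an agent repeatedly selecting a rewriting rule to vectorize scalar code and reduce cost — can be sketched in miniature. All names below are illustrative, not CHEHAB RL's actual interfaces: a single pair-packing rule stands in for the rewrite-rule set, instruction count stands in for the latency/noise objective, and a greedy policy stands in for the trained RL agent.

```python
# Toy sketch (hypothetical, not CHEHAB RL's real API): an agent picks
# rewrite rules to shrink a scalar FHE-style program into vector form.
# A "program" is a list of ops; the one rule here packs two scalar adds
# over distinct slots into a single SIMD-style vector add.

def vectorize_pairs(program):
    """Rewrite rule: merge two adjacent scalar 'add' ops into one 'vadd'."""
    for i in range(len(program) - 1):
        if program[i][0] == "add" and program[i + 1][0] == "add":
            merged = ("vadd", program[i][1] + program[i + 1][1])
            return program[:i] + [merged] + program[i + 2:]
    return program  # rule not applicable; program unchanged

def cost(program):
    """Proxy objective: instruction count stands in for latency/noise."""
    return len(program)

def greedy_agent(program, rules, max_steps=10):
    """Apply, at each step, the rule with the best cost improvement —
    a greedy stand-in for the learned policy choosing one rewrite."""
    for _ in range(max_steps):
        candidates = [rule(program) for rule in rules]
        best = min(candidates, key=cost)
        if cost(best) >= cost(program):
            break  # no rule improves the program; stop rewriting
        program = best
    return program

scalar = [("add", [0]), ("add", [1]), ("add", [2]), ("add", [3])]
optimized = greedy_agent(scalar, [vectorize_pairs])
# Four scalar adds collapse into two vector adds.
```

In the actual framework a neural policy replaces the greedy argmin, trained on LLM-synthesized programs so it generalizes to unstructured code where exhaustive search over rewrite sequences would be intractable.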
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Box Blur | Box Blur 3x3, 4x4, 5x5 | Circuit Depth | 6 | 4 |
| Decision Tree Evaluation | Decision Tree Evaluation (Tree 50-50, 100-50, 100-100 variants) | Circuit Depth | 12 | 3 |
| Dot Product | Dot Product (4, 8, 16, 32) | Circuit Depth | 11 | 2 |
| Linear Regression | Linear Regression (4, 8, 16, 32) | Circuit Depth | 3 | 2 |
| Matrix Multiplication | Matrix Multiplication 3x3, 4x4, 5x5 | Circuit Depth | 4 | 2 |