
When Forgetting Builds Reliability: LLM Unlearning for Reliable Hardware Code Generation

About

Large Language Models (LLMs) have shown strong potential in accelerating digital hardware design through automated code generation. Yet ensuring their reliability remains a critical challenge, as existing LLMs trained on massive heterogeneous datasets often exhibit problematic memorization of proprietary intellectual property (IP), contaminated benchmarks, and unsafe coding patterns. To mitigate these risks, we propose a novel unlearning framework tailored for LLM-based hardware code generation. Our method combines (i) a syntax-preserving unlearning strategy that safeguards the structural integrity of hardware code during forgetting, and (ii) a fine-grained floor-aware selective loss that enables precise and efficient removal of problematic knowledge. This integration achieves effective unlearning without degrading the LLM's code generation capabilities. Extensive experiments show that our framework supports forget sets up to 3x larger, typically requiring only a single training epoch, while preserving both the syntactic correctness and functional integrity of register-transfer level (RTL) code. Our work paves the way toward reliable LLM-assisted hardware design.
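The abstract does not give the loss formula, but the idea of a "floor-aware selective loss" can be sketched as follows: a minimal, hypothetical illustration (not the authors' implementation) in which gradient ascent is applied only to forget-set tokens whose per-token loss is still below a floor threshold, while non-selected (e.g. syntax-carrying) tokens keep the ordinary language-modeling loss. All names (`floor_aware_forget_loss`, `forget_mask`, `floor`) are assumptions for illustration.

```python
import numpy as np

def floor_aware_forget_loss(token_nll, forget_mask, floor=2.0):
    """Hypothetical floor-aware selective loss sketch.

    token_nll:   per-token negative log-likelihood of the model
    forget_mask: 1 for tokens carrying problematic knowledge, 0 otherwise
    floor:       once a forget token's NLL reaches this floor, stop pushing
                 it higher, which limits over-forgetting and collateral damage
    """
    token_nll = np.asarray(token_nll, dtype=float)
    forget_mask = np.asarray(forget_mask, dtype=float)

    # Select only forget tokens that are still "too well remembered".
    active = forget_mask * (token_nll < floor)

    # Gradient ascent (negative loss) on the selected tokens...
    forget_term = -(active * token_nll).sum()
    # ...while retained tokens (e.g. syntax) keep the usual NLL objective.
    retain_term = ((1.0 - forget_mask) * token_nll).sum()
    return forget_term + retain_term
```

Under this sketch, a memorized token whose NLL has already climbed past the floor drops out of the forget term, so further training epochs focus only on tokens that still leak the unwanted knowledge.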

Yiwen Liang, Qiufeng Li, Shikai Wang, Weidong Cao • 2025

Related benchmarks

Task                  Dataset                                      Result          Rank
Unlearning            Syntax-preserving dataset (forget set)       PrivLeak 0.55   40
RTL generation        VerilogEval 156 cases (test)                 Pass@1 0.38     32
RTL code generation   RTLLM (test)                                 Pass@1 42       16
RTL code generation   Syntax-preserving dataset (val)              Loss 0.28       7
Code generation       Unseen simple code generation tasks (test)   Loss 0.29       3
