
ShieldedCode: Learning Robust Representations for Virtual Machine Protected Code

About

Large language models (LLMs) have achieved remarkable progress in code generation, yet their potential for software protection remains largely untapped. Reverse engineering continues to threaten software security, while traditional virtual machine protection (VMP) relies on rigid, rule-based transformations that are costly to design and vulnerable to automated analysis. In this work, we present ShieldedCode, the first protection-aware framework that learns robust representations of VMP-protected code. Our approach builds large-scale paired datasets of source code and normalized VM implementations, and introduces hierarchical dependency modeling at the intra-, preceding-, and inter-instruction levels. We jointly optimize language modeling with functionality-aware and protection-aware contrastive objectives to capture both semantic equivalence and protection strength. To further assess resilience, we propose a protection effectiveness optimization task that quantifies and ranks different VM variants derived from the same source. Coupled with a two-stage continual pre-training and fine-tuning pipeline, our method enables models to generate, compare, and reason over protected code. Extensive experiments show that ShieldedCode significantly improves robustness across diverse protection levels: it achieves 26.95% Pass@1 on L0 VM code generation, compared with 22.58% for GPT-4o, and improves binary similarity detection Recall@1 by 10% over state-of-the-art methods such as jTrans, opening a new research direction for learning-based software defense.
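The abstract describes a joint objective that combines language modeling with two InfoNCE-style contrastive terms. The sketch below is a minimal illustration of that idea, not the paper's actual implementation: embedding vectors, the pairing of positives/negatives, and the weights `alpha`/`beta` are all assumptions made for clarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors (plain Python lists here;
    # in the paper these would come from the code model).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.07):
    """InfoNCE contrastive loss: pull the positive pair together,
    push the negatives apart (numerically stable log-sum-exp)."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # negative log-softmax of the positive

def joint_loss(lm_loss, src_emb, vm_emb, other_vm_embs, alpha=0.5, beta=0.5):
    # Functionality-aware term (assumed pairing): a source function and its
    # VM-protected version should embed close together (semantic equivalence).
    l_func = info_nce(src_emb, vm_emb, other_vm_embs)
    # Protection-aware term (assumed pairing): the protected variant anchored
    # against variants of other sources, capturing protection strength.
    l_prot = info_nce(vm_emb, src_emb, other_vm_embs)
    return lm_loss + alpha * l_func + beta * l_prot
```

With toy 2-D embeddings where the positive pair is identical and the negatives are orthogonal or opposite, `info_nce` is close to zero and `joint_loss` adds only a small penalty on top of the language-modeling loss.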

Mingqiao Mo, Yunlong Tan, Hao Zhang, Heng Zhang, Yangfan He • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Binary Code Similarity Detection | BinaryCorp-VirtualAssembly (test) | Recall@1 | 48.8 | 72
Code Generation | HumanEval compile (L0) | Pass@1 | 26.95 | 8
Code Generation | HumanEval compile (L1) | Pass@1 | 18.47 | 8
Code Generation | HumanEval compile (L2) | Pass@1 | 19.23 | 8
Code Generation | HumanEval compile (L3) | Pass@1 | 14.71 | 8
Performance overhead evaluation | Crypto Operations | Overhead Ratio | 26.4 | 4
Performance overhead evaluation | Compression (zlib) | Overhead Ratio | 22.9 | 4
Performance overhead evaluation | SPEC CPU 2017 | Performance Ratio (Geomean) | 38.7 | 4
Performance overhead evaluation | Database (SQLite) | Performance Overhead | 19.1 | 4
Manual Reverse Engineering | User study (20 VMP-protected functions) | Success Rate | 17 | 4
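The benchmark table reports Pass@1 and Recall@1. For reference, Pass@k is usually computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021), and Recall@1 for binary similarity search is the fraction of query functions whose top-ranked retrieval is the true match. The helper names below are illustrative, not from the paper.

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn from n total with c correct, passes the tests."""
    if n - c < k:
        return 1.0  # too few failures to fill k samples: guaranteed pass
    return 1.0 - comb(n - c, k) / comb(n, k)

def recall_at_1(top1_hits, num_queries):
    """Recall@1: fraction of queries whose nearest neighbor in the
    candidate pool is the ground-truth counterpart."""
    return top1_hits / num_queries
```

For k = 1 the estimator reduces to the raw success rate c/n, so e.g. 3 correct out of 10 samples gives Pass@1 = 0.3, and 488 top-1 hits over 1000 queries gives Recall@1 = 48.8%.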
