
GSS: Gated Subspace Steering for Selective Memorization Mitigation in LLMs

About

Large language models (LLMs) can memorize and reproduce training sequences verbatim, a tendency that undermines both generalization and privacy. Existing mitigation methods apply interventions uniformly, degrading performance on the majority of tokens that generalize normally. We show empirically that memorization is sparse, intermittent, and token-conditioned, suggesting that effective mitigation requires context-aware intervention rather than static parameter modification. To this end, we propose a novel and effective selective memorization mitigation method, Gated Subspace Steering (GSS), which decomposes intervention into a probe (detecting memorization-relevant activations) and a steer (applying targeted correction only when the probe exceeds a threshold). The optimal probe-steer pair emerges from a principled optimization framework based on optimal subspace steering. Experiments on four benchmarks show GSS matches or exceeds state-of-the-art memorization reduction while requiring 100–1000× less compute than optimization-based alternatives. Furthermore, we provide new theoretical insights into the geometry of memorization in neural representations.
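The probe-and-steer decomposition described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the subspace basis `U`, the threshold `tau`, and the strength `alpha` are all hypothetical placeholders for quantities the paper would obtain from its optimization framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: hidden size d, a rank-k subspace assumed to carry
# memorization-relevant signal (in the paper this basis would be learned).
d, k = 64, 4
U = np.linalg.qr(rng.normal(size=(d, k)))[0]  # orthonormal basis, shape (d, k)
tau = 0.5    # gating threshold (assumed hyperparameter)
alpha = 1.0  # steering strength (assumed hyperparameter)

def gated_subspace_steer(h: np.ndarray) -> np.ndarray:
    """Probe the activation h; steer only when the probe exceeds tau."""
    coords = U.T @ h                # probe: read out subspace coordinates
    score = np.linalg.norm(coords)  # memorization-relevance score
    if score > tau:
        # Steer: remove the component of h lying in the probed subspace.
        return h - alpha * (U @ coords)
    return h  # below threshold: leave generalizing tokens untouched

h = rng.normal(size=d)
h_steered = gated_subspace_steer(h)
```

With `alpha = 1.0`, a gated activation loses its subspace component entirely, while activations whose probe score stays below `tau` pass through unchanged — which is the selectivity the abstract argues uniform methods lack.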

Xuanqi Zhang, Haoyang Shang, Xiaoxiao Li • 2026

Related benchmarks

Task                    | Dataset     | Metric                     | Result | Rank
Memorization Reduction  | GSM8K       | Memorization Reduction (%) | 35.2   | 20
Memorization Reduction  | UltraChat   | Memorization Reduction     | 51.4   | 20
Memorization mitigation | Pythia 2.8B | Memory Usage (%)           | 6.93   | 9
Memorization mitigation | Pythia 6.9B | Memory Usage (%)           | 6.96   | 9
Model Editing           | Sanitation  | Locality                   | 0.2015 | 8
