
Language steering in latent space to mitigate unintended code-switching

About

Multilingual Large Language Models (LLMs) often exhibit hallucinations such as unintended code-switching, reducing reliability in downstream tasks. We propose latent-space language steering, a lightweight inference-time method that identifies language directions via Principal Component Analysis (PCA) on parallel translations and steers token embeddings along these axes to control language identity. Our approach mitigates code-switching while preserving semantics with negligible computational overhead and requires only minimal parallel data for calibration. Empirically, we achieve 95–99% language classification accuracy using a single principal component and reduce next-token distributional divergence by up to 55% across multiple language pairs on Qwen2.5 and Llama-3.2 models. Generation-based evaluation on Llama-3.2 further demonstrates a 63–99% reduction in Code-Switching Index across four language pairs (p < 0.001). We further analyze the layer-wise evolution of language representations, revealing that language identity concentrates in final layers with near-perfect linear separability.
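The core idea in the abstract can be sketched in a few lines: run PCA over pooled hidden states of parallel translations, treat the first principal component as a language axis, classify by projecting onto it, and steer by nudging a hidden state along it. The sketch below uses synthetic "language clouds" in place of real model activations; the dimensions, the `steer` helper, and the `alpha` strength parameter are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for last-layer hidden states of parallel translations.
# Assumption: real states would come from a model such as Llama-3.2;
# here two synthetic language "clouds" are separated along one axis.
d, n = 64, 200
lang_dir = rng.normal(size=d)
lang_dir /= np.linalg.norm(lang_dir)
h_en = rng.normal(size=(n, d)) + 2.0 * lang_dir  # "English" states
h_ru = rng.normal(size=(n, d)) - 2.0 * lang_dir  # "Russian" states

# PCA on the pooled, centered states: when language identity dominates
# the variance, the first principal component recovers the language axis.
X = np.vstack([h_en, h_ru])
mu = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mu, full_matrices=False)
pc1 = vt[0]  # first principal component

# Single-component language classification: project and threshold at 0.
proj = (X - mu) @ pc1
labels = np.array([0] * n + [1] * n)
pred = (proj < 0).astype(int)
acc = max((pred == labels).mean(), (pred != labels).mean())
print(f"accuracy from one component: {acc:.3f}")

# Steering: nudge a hidden state along the language axis toward the
# target language (alpha is a hypothetical strength hyperparameter).
def steer(h, direction, alpha=4.0):
    return h + alpha * direction
```

On this toy data the single-component classifier separates the two languages almost perfectly, mirroring the 95–99% accuracy the abstract reports for real models.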

Andrey Goncharov, Nikolai Kondusov, Alexey Zaytsev • 2025

Related benchmarks

| Task | Dataset | Result (CSI) | Rank |
| --- | --- | --- | --- |
| Code-switching mitigation | TED Talks English-Russian, Llama-3.2-1B, n=500 (test) | 1 | 3 |
| Code-switching mitigation | TED Talks English-Chinese, Llama-3.2-1B, n=500 (test) | 3 | 3 |
| Code-switching mitigation | TED Talks English-Spanish, Llama-3.2-1B, n=500 (test) | 23 | 3 |
| Code-switching mitigation | TED Talks English-Hindi, Llama-3.2-1B, n=500 (test) | 4 | 3 |
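The benchmark results above are reported as a Code-Switching Index (CSI). Definitions of CSI vary across the literature; a minimal sketch, assuming CSI is the percentage of generated words not in the target language, could look like the following (both `csi` and the Cyrillic-based `is_russian` predicate are hypothetical helpers, not the paper's metric):

```python
def csi(tokens, is_target_language):
    """Assumed CSI: percentage of tokens NOT in the target language."""
    if not tokens:
        return 0.0
    off_target = sum(1 for t in tokens if not is_target_language(t))
    return 100.0 * off_target / len(tokens)

def is_russian(word):
    """Toy language test: every alphabetic character is Cyrillic."""
    return all('\u0400' <= ch <= '\u04ff' for ch in word if ch.isalpha())

# One English intrusion in four Russian tokens -> CSI of 25.0.
tokens = ["привет", "мир", "hello", "это"]
print(csi(tokens, is_russian))  # → 25.0
```

Under this reading, the 63–99% reductions quoted in the abstract mean the steered model emits far fewer off-target-language words per generation.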
