
Alignment at Pre-training! Towards Native Alignment for Arabic LLMs

About

The alignment of large language models (LLMs) is critical for developing effective and safe language models. Traditional approaches focus on aligning models during the instruction tuning or reinforcement learning stages, referred to in this paper as 'post alignment'. We argue that alignment during the pre-training phase, which we term 'native alignment', warrants investigation. Native alignment aims to prevent unaligned content from the beginning, rather than relying on post-hoc processing. This approach leverages extensively aligned pre-training data to enhance the effectiveness and usability of pre-trained models. Our study specifically explores the application of native alignment in the context of Arabic LLMs. We conduct comprehensive experiments and ablation studies to evaluate the impact of native alignment on model performance and alignment stability. Additionally, we release open-source Arabic LLMs that demonstrate state-of-the-art performance on various benchmarks, providing significant benefits to the Arabic LLM community.
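The core idea of native alignment is to clean or filter the pre-training corpus so that unaligned content never enters training, rather than correcting the model afterwards. The abstract does not specify the paper's actual data pipeline, so the sketch below is purely illustrative: a hypothetical `alignment_score` heuristic (a toy blocklist ratio, standing in for whatever classifier or rewriting model a real pipeline would use) gates which documents survive into the pre-training set.

```python
# Illustrative sketch of "native alignment" as corpus filtering.
# alignment_score is a hypothetical toy heuristic, NOT the paper's method:
# a real pipeline would use a learned alignment classifier or a rewriting model.

def alignment_score(text: str) -> float:
    """Toy scorer: fraction of tokens not on a small (hypothetical) blocklist."""
    blocklist = {"unsafe", "toxic"}
    tokens = text.split()
    if not tokens:
        return 0.0
    flagged = sum(1 for t in tokens if t.lower() in blocklist)
    return 1.0 - flagged / len(tokens)

def filter_corpus(docs: list[str], threshold: float = 0.99) -> list[str]:
    """Keep only documents whose alignment score clears the threshold."""
    return [d for d in docs if alignment_score(d) >= threshold]

corpus = [
    "A well-written, informative paragraph.",
    "Some toxic unsafe content here.",
]
print(filter_corpus(corpus))  # only the first document survives
```

The design point this illustrates: the filter runs once over the corpus before training begins, so alignment cost is paid at data-preparation time instead of at instruction tuning or RLHF time.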

Juhao Liang, Zhenyang Cai, Jianqing Zhu, Huang Huang, Kewei Zong, Bang An, Mosen Alharthi, Juncai He, Lian Zhang, Haizhou Li, Benyou Wang, Jinchao Xu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Multitask Language Understanding | ArabicMMLU | Accuracy | 66.56 | 16 |
| Question Answering | ISLAMICFAITHQA | Accuracy (Arabic) | 23.1 | 15 |
| Arabic Cultural Value Alignment | ACVA all | Accuracy | 81.36 | 10 |
| Multiple-choice Question Answering | EXAMS | Accuracy | 55.49 | 10 |
| Arabic Cultural Value Alignment | ACVA clean | Accuracy | 82.58 | 10 |
| Trustworthiness evaluation | AraTrust | Accuracy | 63.41 | 8 |
