
Can Safety Emerge from Weak Supervision? A Systematic Analysis of Small Language Models

About

Safety alignment is critical for deploying large language models (LLMs) in real-world applications, yet most existing approaches rely on large human-annotated datasets and static red-teaming benchmarks that are costly, difficult to scale, and slow to adapt to evolving model behaviors. Moreover, overly conservative safety mechanisms can reduce model usefulness by rejecting sensitive but legitimate queries. We introduce Self-MOA (Self Multi-Objective Alignment), a fully automated framework for aligning small language models using weak supervision from automated evaluator models. Self-MOA operates as a closed loop that dynamically generates model-specific red-team prompts, constructs preference data from model-generated responses, and aligns models via multi-objective preference optimization to jointly optimize for safety and helpfulness. Across multiple small language models and safety benchmarks, Self-MOA achieves a 12.41% improvement in safety while preserving helpfulness, using up to 11 times less training data than human-supervised alignment baselines. These results demonstrate that adaptive, automated alignment can reduce the dependence on static, human-curated safety pipelines in resource-constrained settings.
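The closed loop described above can be sketched in miniature. The sketch below is purely illustrative: the function names, the stub evaluators, and the weighted-sum scoring rule are assumptions, not the authors' implementation. It shows the shape of the pipeline — generate model-specific red-team prompts, sample candidate responses, score each response on safety and helpfulness with automated evaluators, and keep (chosen, rejected) pairs for preference optimization.

```python
# Hypothetical sketch of a Self-MOA-style closed loop.
# All function names, stub scores, and the weighted multi-objective
# ranking rule are illustrative assumptions, not the paper's method.

def generate_red_team_prompts(model_name, n=4):
    # The real framework generates prompts dynamically per model;
    # fixed placeholders stand in here.
    return [f"adversarial prompt {i}" for i in range(n)]

def sample_responses(model_name, prompt, k=2):
    # Stand-in for sampling k candidate responses from the model.
    return [f"{model_name}:{prompt}:response {j}" for j in range(k)]

def evaluate(response):
    # Weak supervision: automated evaluators score each response.
    # Deterministic stub scores for illustration only.
    return {"safety": len(response) % 5, "helpfulness": len(response) % 3}

def build_preference_pairs(model_name, weights=(0.5, 0.5)):
    """Construct (prompt, chosen, rejected) triples by ranking responses
    with a weighted sum of safety and helpfulness scores."""
    w_safe, w_help = weights
    pairs = []
    for prompt in generate_red_team_prompts(model_name):
        ranked = sorted(
            sample_responses(model_name, prompt),
            key=lambda r: (w_safe * evaluate(r)["safety"]
                           + w_help * evaluate(r)["helpfulness"]),
            reverse=True,
        )
        # Best-scoring response is "chosen", worst is "rejected";
        # these triples would feed a preference-optimization step.
        pairs.append((prompt, ranked[0], ranked[-1]))
    return pairs
```

In a full loop, the resulting pairs would train the model (e.g. via a preference-optimization objective), and the next iteration would regenerate prompts against the updated model.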

Punyajoy Saha, Sudipta Halder, Debjyoti Mondal, Subhadarshi Panda • 2026

Related benchmarks

Task                                       Dataset                           Metric                      Result   Rank
Commonsense Reasoning                      HellaSwag                         Accuracy                    60.01    1891
Commonsense Reasoning                      WinoGrande                        Accuracy                    74.03    372
Word Prediction                            LAMBADA                           Accuracy                    65.46    148
Massive Multitask Language Understanding   MMLU                              Accuracy                    59.02    117
Helpfulness Evaluation                     Manual Evaluation Set             Average Helpfulness Score   4.57     24
Safety Evaluation                          Manual Evaluation Set             Average Safety Score        3.83     12
Safety Evaluation                          Manual Evaluation Safety Dataset  Average Safety Score        3.83     12
