
LAB: Large-Scale Alignment for ChatBots

About

This work introduces LAB (Large-scale Alignment for chatBots), a novel methodology designed to overcome the scalability challenges in the instruction-tuning phase of large language model (LLM) training. Leveraging a taxonomy-guided synthetic data generation process and a multi-phase tuning framework, LAB significantly reduces reliance on expensive human annotations and on proprietary models like GPT-4. We demonstrate that LAB-trained models achieve performance competitive with models trained on traditional human-annotated or GPT-4-generated synthetic data across several benchmarks. LAB thus offers a scalable, cost-effective way to enhance LLM capabilities and instruction-following behavior without catastrophic forgetting, marking a step forward in the efficient training of LLMs for a wide range of applications.
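The core of the data pipeline described above is a taxonomy whose leaves seed synthetic instruction generation. A minimal sketch of that idea, assuming a nested-dict taxonomy and a hypothetical prompt template (the paper's actual taxonomy, prompts, and teacher model are not shown here):

```python
# Hypothetical toy taxonomy; the real one in LAB is far larger and
# split into knowledge, foundational skills, and compositional skills.
TAXONOMY = {
    "knowledge": {"history": None, "science": None},
    "skills": {"summarization": None, "coding": None},
}

def leaf_paths(tree, prefix=()):
    """Yield the path from the root to every leaf node in the taxonomy."""
    for name, child in tree.items():
        path = prefix + (name,)
        if child is None:
            yield path
        else:
            yield from leaf_paths(child, path)

def make_seed_prompt(path):
    """Build a generation prompt for a teacher model (assumed template)."""
    return f"Generate an instruction-response pair about: {' / '.join(path)}"

# One seed prompt per leaf; in LAB these would be sent to a teacher
# model to produce synthetic instruction-tuning data.
prompts = [make_seed_prompt(p) for p in leaf_paths(TAXONOMY)]
```

Walking the taxonomy leaf by leaf is what gives the generated data broad, controllable coverage, rather than relying on whatever distribution a teacher model produces unprompted.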

Shivchander Sudalairaj, Abhishek Bhandwaldar, Aldo Pareja, Kai Xu, David D. Cox, Akash Srivastava • 2024

Related benchmarks

Task                             Dataset                    Metric                         Result  Rank
Language Modeling and Reasoning  Open LLM Leaderboard       ARC                            81.6    33
Instruction Following            AlpacaEval GPT-4 (test)    AlpacaEval Win Rate (GPT-4)    17.1    6
