
Sloth: scaling laws for LLM skills to predict multi-benchmark performance across families

About

Scaling laws for large language models (LLMs) predict model performance based on parameters like size and training data. However, differences in training configurations and data processing across model families lead to significant variations in benchmark performance, making it difficult for a single scaling law to generalize across all LLMs. On the other hand, training family-specific scaling laws requires training models of varying sizes for every family. In this work, we propose Skills Scaling Laws (SSLaws, pronounced as Sloth), a novel scaling law that leverages publicly available benchmark data and assumes LLM performance is driven by low-dimensional latent skills, such as reasoning and instruction following. These latent skills are influenced by computational resources like model size and training tokens, but with varying efficiencies across model families. Sloth exploits correlations across benchmarks to provide more accurate and interpretable predictions while alleviating the need to train multiple LLMs per family. We present both theoretical results on parameter identification and empirical evaluations on 12 prominent benchmarks from the Open LLM Leaderboard v1 and v2, demonstrating that Sloth predicts LLM performance accurately and offers insights into scaling behaviors for complex downstream tasks, increased test-time compute, and compute-optimal scaling of skills.
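The core idea — benchmark scores driven by a few latent skills that scale with compute at family-specific efficiencies — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the assumed functional form (skills linear in log parameters and log tokens with family-specific weights, benchmark scores a sigmoid of benchmark-specific skill loadings) and all names and numbers are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_benchmarks(log_params, log_tokens, family, W_family, loadings, intercepts):
    """Sketch of a latent-skill scaling law.

    skills = family-specific linear map of compute features (bias, log N, log D)
    scores = sigmoid(benchmark loadings @ skills + benchmark intercepts)
    """
    x = np.array([1.0, log_params, log_tokens])   # compute features + bias
    skills = W_family[family] @ x                 # (k,) latent skill levels
    return sigmoid(loadings @ skills + intercepts)  # per-benchmark score in (0, 1)

# Toy setup: 2 latent skills, 3 benchmarks, 1 model family (all values illustrative).
W_family = {
    "llama": np.array([[-1.0, 0.05, 0.03],   # skill 1 efficiency for this family
                       [-1.5, 0.04, 0.02]])  # skill 2 efficiency for this family
}
loadings = np.array([[1.0, 0.2],   # benchmark-specific skill loadings,
                     [0.5, 1.0],   # shared across families -- this is what
                     [0.3, 0.7]])  # lets correlations across benchmarks help
intercepts = np.array([-0.5, -0.2, -1.0])

preds = predict_benchmarks(np.log(7e9), np.log(1e12), "llama",
                           W_family, loadings, intercepts)
```

Because the loadings are shared across families while only the small efficiency matrix `W_family` is family-specific, a new family's parameters can in principle be fit from a handful of its models, which is the practical appeal described in the abstract.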

Felipe Maia Polo, Seamus Somerstep, Leshem Choshen, Yuekai Sun, Mikhail Yurochkin • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Large Model Performance Prediction | Large Model Performance Prediction (60% masking) | RMSE 11.6 | 10 |
| Large Model Performance Prediction | OpenCompass (95% masking, September 30, 2024 cutoff, temporal split) | RMSE 13.01 | 10 |
| Large Model Performance Prediction | Large Model Performance Prediction dataset 1.0 (40% masking) | RMSE 11.66 | 10 |
| Performance Prediction | Large Model Performance Prediction dataset (80% masking, test) | RMSE 11.98 | 10 |
