
Gaperon: A Peppered English-French Generative Language Model Suite

About

We release Gaperon, a fully open suite of French-English-coding language models designed to advance transparency and reproducibility in large-scale model training. The Gaperon family includes 1.5B, 8B, and 24B parameter models trained on 2-4 trillion tokens, released with all elements of the training pipeline: French and English datasets filtered with a neural quality classifier, an efficient data curation and training framework, and hundreds of intermediate checkpoints. Through this work, we study how data filtering and contamination interact to shape both benchmark and generative performance. We find that filtering for linguistic quality enhances text fluency and coherence but yields subpar benchmark results, and that late deliberate contamination -- continuing training on data mixes that include test sets -- recovers competitive scores while causing only limited harm to generation quality. We discuss how standard neural filtering can unintentionally amplify benchmark leakage. To support further research, we also introduce harmless data poisoning during pretraining, providing a realistic testbed for safety studies. By openly releasing all models, datasets, code, and checkpoints, Gaperon establishes a reproducible foundation for exploring the trade-offs between data curation, evaluation, safety, and openness in multilingual language model development.
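The quality-filtering step described above can be sketched as scoring each document with a classifier and keeping only those above a threshold. The sketch below is a minimal illustration of that pattern, assuming a stand-in heuristic scorer; Gaperon's actual neural quality classifier is part of the released pipeline and is not reproduced here.

```python
# Sketch of threshold-based quality filtering, as in the abstract's pipeline.
# quality_score is a TOY stand-in (length/punctuation heuristic), NOT the
# neural classifier released with Gaperon.

def quality_score(doc: str) -> float:
    """Toy stand-in for a neural quality classifier; returns a score in [0, 1]."""
    words = doc.split()
    if not words:
        return 0.0
    # Longer average word length loosely proxies for richer vocabulary.
    avg_word_len = sum(len(w) for w in words) / len(words)
    # Penalize text that does not end with sentence-final punctuation.
    ends_ok = 1.0 if doc.rstrip().endswith((".", "!", "?")) else 0.5
    return min(1.0, avg_word_len / 10.0) * ends_ok

def filter_corpus(docs: list[str], threshold: float = 0.3) -> list[str]:
    """Keep only documents whose quality score exceeds the threshold."""
    return [d for d in docs if quality_score(d) > threshold]

corpus = [
    "Short junk",
    "A well-formed sentence with reasonable vocabulary and punctuation.",
]
kept = filter_corpus(corpus)  # only the second document survives
```

In a real pipeline, the scorer would be a trained model and the threshold tuned on held-out data; the abstract's point is that such filtering, while improving fluency, can also systematically favor benchmark-like text and thus amplify leakage.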

Nathan Godey, Wissam Antoun, Rian Touchent, Rachel Bawden, Éric de la Clergerie, Benoît Sagot, Djamé Seddah • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Generative Question Answering | Bolmo Evaluation Suite GenQA 7B | GenQA Average: 65.3 | 29 |
| Code Generation | OlmoBaseEval Code (BigCodeBench, HumanEval, DeepSeek LeetCode, DS 1000, MBPP, MultiPL) | OlmoBaseEval Code Score: 19.4 | 24 |
| General Capability Evaluation (Held-out Benchmarks) | OlmoBaseEval LBPP, BBH, MMLU Pro MC, Deepmind Math (HeldOut) | LBPP Score: 4.7 | 24 |
| Mathematical Reasoning | OlmoBaseEval Math (GSM8k, GSM Symbolic, MATH) | Math Aggregate Score: 20.7 | 24 |
| Multiple Choice Non-STEM Question Answering | OlmoBaseEval MC Non-STEM (MMLU Humanities/Social Sci, CSQA, PiQA, SocialIQA, CoQA, DROP, Jeopardy, NaturalQs, SQuAD) | Aggregate Score: 65 | 24 |
| Multiple-choice Question Answering | Bolmo Evaluation Suite MC STEM 7B | MC STEM Average Accuracy: 58 | 17 |
| Multiple Choice STEM Question Answering | OlmoBaseEval MCSTEM | MCSTEM Score: 56.2 | 12 |
