
CoPE: A Small Language Model for Steerable and Scalable Content Labeling

About

This paper details the methodology behind CoPE, a policy-steerable small language model capable of fast and accurate content labeling. We present a novel training curriculum, Contradictory Example Training, that enables the model to learn policy interpretation rather than mere policy memorization. We also present a novel method for generating content policies, called Binocular Labeling, which enables rapid construction of unambiguous training datasets. Evaluated across seven harm areas, CoPE matches or exceeds the accuracy of frontier models at only 1% of their size. We openly release a 9-billion-parameter version of the model that can run on a single consumer-grade GPU. Models like CoPE represent a paradigm shift for classifier systems: by turning an ML task into a policy-writing task, CoPE opens up new design possibilities for the governance of online platforms.
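The "policy-writing" framing described above can be sketched as a prompt interface: the policy text is supplied alongside the content item, and the model's answer is parsed into a binary label. The prompt format, label vocabulary, and parsing below are illustrative assumptions for a minimal sketch, not CoPE's actual interface.

```python
# Sketch of a policy-steerable labeling interface. The prompt layout and
# the VIOLATES/ALLOWED label vocabulary are assumptions, not CoPE's API.

POLICY = (
    "Hate speech policy: content that attacks a person or group on the "
    "basis of a protected characteristic violates this policy."
)

def build_label_prompt(policy: str, content: str) -> str:
    """Compose a single labeling prompt from a policy and a content item.

    Steering the classifier means editing the policy string, not retraining.
    """
    return (
        f"POLICY:\n{policy}\n\n"
        f"CONTENT:\n{content}\n\n"
        "Does the content violate the policy? Answer VIOLATES or ALLOWED."
    )

def parse_label(model_output: str) -> bool:
    """Map the model's free-text answer onto a binary violation label."""
    return model_output.strip().upper().startswith("VIOLATES")

# In practice the prompt would be sent to the released 9B model; here we
# only show the steering interface around a hypothetical model response.
prompt = build_label_prompt(POLICY, "example post text")
label = parse_label("VIOLATES")  # substitute the real model output
```

Because the policy is plain input rather than baked-in training data, changing moderation behavior reduces to rewriting the policy string.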

Samidh Chakrabarti, David Willner, Kevin Klyman, Tiffany Saade, Emily Capstick, Sabina Nong • 2025

Related benchmarks

Task                              Dataset          Metric     Result (%)  Rank
Hate speech classification        Internal Set     Precision  93          8
Self-harm content classification  Internal Set     Precision  83          5
Sexual content classification     Internal Set     Precision  96          5
Hate speech classification        Ethos Benchmark  Precision  80          5
Harassment classification         Internal Set     Precision  69          4
