# Social-R1: Towards Human-like Social Reasoning in LLMs

## About
While large language models demonstrate remarkable capabilities across numerous domains, social intelligence (the capacity to perceive social cues, infer mental states, and generate appropriate responses) remains a critical challenge, particularly for enabling effective human-AI collaboration and developing AI that truly serves human needs. Current models often rely on superficial patterns rather than genuine social reasoning. We argue that cultivating human-like social intelligence requires training on challenging cases that resist shortcut solutions. To this end, we introduce ToMBench-Hard, an adversarial benchmark designed to provide hard training examples for social reasoning. Building on this, we propose Social-R1, a reinforcement learning framework that aligns model reasoning with human cognition through multi-dimensional rewards. Unlike outcome-based RL, Social-R1 supervises the entire reasoning process, enforcing structural alignment, logical integrity, and information density. Results show that our approach enables a 4B-parameter model to surpass much larger counterparts and to generalize robustly across eight diverse benchmarks. These findings demonstrate that challenging training cases combined with trajectory-level alignment offer a path toward efficient and reliable social intelligence.
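To make the trajectory-level reward concrete, the sketch below shows one plausible way to combine the three process dimensions named above (structural alignment, logical integrity, information density) with an outcome term into a single scalar reward. All function bodies, stage keywords, and weights here are illustrative assumptions, not the paper's actual scoring functions.

```python
# Hypothetical sketch of a multi-dimensional process reward (all heuristics
# and weights below are assumptions for illustration, not Social-R1's actual
# reward functions).

def structural_alignment(trajectory: list[str]) -> float:
    """Fraction of expected reasoning stages present (assumed stage keywords)."""
    stages = ["perceive", "infer", "respond"]
    text = " ".join(trajectory).lower()
    return sum(s in text for s in stages) / len(stages)

def logical_integrity(trajectory: list[str]) -> float:
    """Toy consistency check: penalize an explicit stated contradiction."""
    text = " ".join(trajectory).lower()
    return 0.0 if ("is happy" in text and "is not happy" in text) else 1.0

def information_density(trajectory: list[str]) -> float:
    """Ratio of unique words to total words, discouraging filler repetition."""
    words = " ".join(trajectory).lower().split()
    return len(set(words)) / len(words) if words else 0.0

def process_reward(trajectory: list[str], outcome_correct: bool,
                   w=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Weighted sum over process dimensions plus the final-answer outcome.

    Unlike outcome-only RL, the trajectory itself contributes most of the
    reward, so a correct answer reached by incoherent reasoning scores low.
    """
    return (w[0] * structural_alignment(trajectory)
            + w[1] * logical_integrity(trajectory)
            + w[2] * information_density(trajectory)
            + w[3] * float(outcome_correct))

traj = ["Perceive: Anna frowns.", "Infer: she is not happy.",
        "Respond: offer help."]
print(process_reward(traj, outcome_correct=True))
```

A policy-gradient method such as PPO or GRPO could then optimize this scalar directly; the point of the sketch is only that the reward is computed over the whole trajectory, not just the final answer.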
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Social Commonsense Reasoning | SocialIQA | Accuracy | 77.74 | 100 |
| Social Reasoning | SimpleToM | Accuracy | 96.75 | 29 |
| Social Reasoning | ToMBench Hard (val) | Accuracy | 62.79 | 26 |
| Social Reasoning | Hi-ToM | Accuracy | 70.83 | 26 |
| Social Reasoning | MotiveBench | Accuracy | 88.89 | 26 |
| Social Reasoning | EmoBench | Accuracy | 70.1 | 26 |
| Social Reasoning | ToMBench | Accuracy | 68.81 | 26 |
| Social Reasoning | TactfulToM | Accuracy | 50.79 | 26 |