
effGen: Enabling Small Language Models as Capable Autonomous Agents

About

Most language model agentic systems today are built and optimized for large language models (e.g., GPT, Claude, Gemini) accessed via API calls. While powerful, this approach faces several limitations, including high token costs and privacy concerns for sensitive applications. We introduce effGen, an open-source agentic framework optimized for small language models (SLMs) that enables effective, efficient, and secure local deployment (pip install effgen). effGen makes four major contributions: (1) enhanced tool-calling with prompt optimization that compresses contexts by 70-80% while preserving task semantics, (2) intelligent task decomposition that breaks complex queries into parallel or sequential subtasks based on their dependencies, (3) complexity-based routing that uses five factors to make smart pre-execution decisions, and (4) a unified memory system combining short-term, long-term, and vector-based storage. Additionally, effGen unifies multiple agent protocols (MCP, A2A, ACP) for cross-protocol communication. Results on 13 benchmarks show effGen outperforms LangChain, AutoGen, and Smolagents with higher success rates, faster execution, and lower memory usage. Our results reveal that prompt optimization and complexity routing have complementary scaling behavior: optimization benefits SLMs more (11.2% gain at 1.5B vs. 2.4% at 32B), while routing benefits larger models more (3.6% at 1.5B vs. 7.9% at 32B), providing consistent gains across all scales when combined. effGen (https://effgen.org/) is released under the MIT License, ensuring broad accessibility for research and commercial use. Our framework code is publicly available at https://github.com/ctrl-gaurav/effGen.
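To make the complexity-based routing idea concrete, the sketch below scores a query on several normalized factors and routes it before execution. This is not effGen's actual API: the abstract does not name the five factors, so the factor names, weights, and threshold here are all invented for illustration.

```python
# Hypothetical sketch of complexity-based pre-execution routing.
# effGen scores each query on five factors; the factors, weights,
# and threshold below are stand-ins, not the framework's real ones.

WEIGHTS = {
    "length": 0.20,      # longer queries tend to be harder
    "tool_count": 0.25,  # more candidate tools -> more planning
    "multi_step": 0.25,  # explicit sequencing ("then", "after")
    "math": 0.15,        # numeric content suggests calculation
    "ambiguity": 0.15,   # vague referents may need clarification
}

def score_factors(query: str, candidate_tools: int) -> dict:
    """Return each factor normalized to the [0, 1] range."""
    words = query.lower().split()
    return {
        "length": min(len(words) / 50, 1.0),
        "tool_count": min(candidate_tools / 4, 1.0),
        "multi_step": 1.0 if any(w in words for w in ("then", "after", "finally")) else 0.0,
        "math": 1.0 if any(c.isdigit() for c in query) else 0.0,
        "ambiguity": 1.0 if any(w in words for w in ("it", "that", "something")) else 0.0,
    }

def route(query: str, candidate_tools: int, threshold: float = 0.45) -> str:
    """Decide, before running anything, whether to answer directly
    or decompose the query into subtasks first."""
    scores = score_factors(query, candidate_tools)
    total = sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)
    return "decompose" if total >= threshold else "direct"
```

Under this toy scoring, a short arithmetic question routes to `direct`, while a multi-step, multi-tool request crosses the threshold and routes to `decompose`, so the planning overhead is only paid when the pre-execution score predicts it will help.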

Gaurav Srivastava, Aafiya Hussain, Chi Wang, Yingyan Celine Lin, Xuan Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 95.75 | 358 |
| Mathematical Reasoning | MATH 500 | Accuracy | 75.4 | 119 |
| Agentic Evaluation | GAIA | Accuracy | 28.12 | 50 |
| Agentic Evaluation | SimpleQA | Accuracy | 84 | 50 |
| Agentic Benchmarks | GAIA | Execution Time (min) | 1.6 | 25 |
| Efficiency Average | Consolidated Benchmarks All | Avg Execution Time (min) | 11.2 | 25 |
| Math Reasoning | BeyondBench Easy | Accuracy | 96.67 | 25 |
| Math Reasoning | BeyondBench Hard | Accuracy | 58.86 | 25 |
| Math Reasoning (coding tools) | BeyondBench Easy | Execution Time (min) | 3.4 | 25 |
| Mathematical Reasoning (Calculator) | GSM8K | Accuracy | 94.11 | 25 |

Showing 10 of 23 rows.
