Learning and Enforcing Context-Sensitive Control for LLMs

About

Controlling the output of Large Language Models (LLMs) through context-sensitive constraints has emerged as a promising approach to overcome the limitations of Context-Free Grammars (CFGs) in guaranteeing generation validity. However, such constraints typically require manual specification -- a significant barrier demanding specialized expertise. We introduce a framework that automatically learns context-sensitive constraints from LLM interactions through a two-phase process: syntactic exploration to gather diverse outputs for constraint learning, followed by constraint exploitation to enforce these learned rules during generation. Experiments demonstrate that our method enables even small LLMs (1B parameters) to learn and generate with perfect constraint adherence, outperforming larger counterparts and state-of-the-art reasoning models. This work represents the first integration of context-sensitive grammar learning with LLM generation, eliminating manual specification while maintaining generation validity.
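To make the setting concrete: the benchmark language a^n b^n c^n (equal counts of a's, b's, and c's) is the textbook example of a constraint no Context-Free Grammar can express, so validity must be checked with a context-sensitive rule. The following is a minimal illustrative sketch, not the paper's implementation; the function names `in_anbncn` and `enforce` are hypothetical, standing in for a learned constraint and the exploitation phase that filters LLM outputs against it.

```python
# a^n b^n c^n is context-sensitive: no CFG can require that all three
# block lengths match, so a direct membership check plays the role of
# the learned constraint here (illustrative sketch, not the paper's code).

def in_anbncn(s: str) -> bool:
    """Return True iff s = a^n b^n c^n for some n >= 1."""
    n = len(s) // 3
    if n == 0 or len(s) != 3 * n:
        return False
    return s == "a" * n + "b" * n + "c" * n

def enforce(candidates: list[str]) -> list[str]:
    """Constraint-exploitation sketch: keep only outputs that satisfy
    the learned context-sensitive rule."""
    return [s for s in candidates if in_anbncn(s)]

print(enforce(["abc", "aabbcc", "aabbc", "abcabc"]))  # valid strings only
```

In the paper's framework the constraint itself is learned from diverse LLM outputs during the exploration phase rather than hand-written as above; the sketch only shows why a context-sensitive check (rather than a CFG) is required at enforcement time.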

Mohammad Albinhassan, Pranava Madhyastha, Mark Law, Alessandra Russo • 2026

Related benchmarks

Task                         Dataset                                    Result          Rank
Synthetic Grammar Synthesis  Synthetic Grammar Synthesis a^n b^n c^n    Accuracy: 100   29
Grammar Synthesis            L2 a^n b^n c^m                             Accuracy: 100   11
