Syntactic Control of Language Models by Posterior Inference

About

Controlling the syntactic structure of text generated by language models is valuable for applications requiring clarity, stylistic consistency, or interpretability, yet it remains a challenging task. In this paper, we argue that sampling algorithms based on posterior inference can effectively enforce a target constituency structure during generation. Our approach combines sequential Monte Carlo, which estimates the posterior distribution by sampling from a proposal distribution, with a syntactic tagger that ensures each generated token aligns with the desired syntactic structure. Our experiments with GPT2 and Llama3-8B models show that, with an appropriate proposal distribution, we can improve syntactic accuracy, increasing the F1 score from $12.31$ (GPT2-large) and $35.33$ (Llama3-8B) to about $93$ in both cases without compromising the language model's fluency. These results underscore both the complexity of syntactic control and the effectiveness of sampling algorithms, offering a promising approach for applications where precise control over syntax is essential.
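
To make the idea concrete, below is a minimal, self-contained Python sketch of SMC-style decoding with tagger-based reweighting, under stated assumptions: the helpers `lm_next_token_logprobs` and `tagger_logweight` are hypothetical toy stand-ins for a real language model and syntactic tagger, and this is not the paper's implementation.

```python
import math
import random

# Toy vocabulary; a real system would use the LM's tokenizer.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def lm_next_token_logprobs(prefix):
    # Placeholder LM (hypothetical): uniform over the toy vocabulary.
    # In the paper's setting this would be GPT2 or Llama3-8B.
    return {tok: -math.log(len(VOCAB)) for tok in VOCAB}

def tagger_logweight(prefix, token, target_tree):
    # Placeholder tagger score (hypothetical): a real syntactic tagger would
    # score how consistent the extended prefix is with the target
    # constituency structure.
    return 0.0

def sample_from(logprobs):
    toks = list(logprobs)
    weights = [math.exp(logprobs[t]) for t in toks]
    return random.choices(toks, weights=weights, k=1)[0]

def resample(particles, k):
    # Multinomial resampling proportional to normalized weights; weights
    # reset to uniform (log 1 = 0) afterward, as in standard SMC.
    logws = [lw for _, lw in particles]
    m = max(logws)
    weights = [math.exp(lw - m) for lw in logws]
    chosen = random.choices(particles, weights=weights, k=k)
    return [(toks, 0.0) for toks, _ in chosen]

def smc_generate(target_tree, num_particles=16, max_len=8):
    # Each particle is a (token sequence, accumulated log-weight) pair.
    particles = [([], 0.0) for _ in range(num_particles)]
    for _ in range(max_len):
        extended = []
        for tokens, logw in particles:
            # Proposal step: draw the next token from the proposal (here, the LM).
            token = sample_from(lm_next_token_logprobs(tokens))
            # Weighting step: reweight by the tagger's agreement with the
            # target structure.
            new_logw = logw + tagger_logweight(tokens, token, target_tree)
            extended.append((tokens + [token], new_logw))
        # Resampling step: keep particles in proportion to their weights.
        particles = resample(extended, num_particles)
    # Return the highest-weight particle as an approximate posterior sample.
    return max(particles, key=lambda p: p[1])[0]

if __name__ == "__main__":
    print(" ".join(smc_generate(target_tree=None, num_particles=8)))
```

With uniform placeholder weights the resampling step is a no-op; the point of the sketch is the propose-weight-resample loop, in which a non-trivial tagger score steers surviving particles toward the target constituency structure.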

Vicky Xefteri, Tim Vieira, Ryan Cotterell, Afra Amini • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Controlled Text Generation | Syntactic Control (Q = p) (test) | Log Probability p(y): -22.71 | 12 |
| Controlled Text Generation | Syntactic Control (Q ∝ pq) (test) | Log Probability Q(y): 1.00e-4 | 12 |

Other info

Code
