Gated Tree Cross-attention for Checkpoint-Compatible Syntax Injection in Decoder-Only LLMs
About
Decoder-only large language models achieve broadly strong performance but are brittle to minor grammatical perturbations, undermining reliability for downstream reasoning. Yet directly injecting explicit syntactic structure into an existing checkpoint can interfere with its pretrained competence. We introduce a checkpoint-compatible gated tree cross-attention (GTCA) branch that reads precomputed constituency-chunk memory while leaving the backbone architecture unchanged. Our design uses a token update mask and staged training to control the scope and timing of structural updates. Across benchmarks and Transformer backbones, GTCA strengthens syntactic robustness beyond continued-training baselines without compromising multiple-choice QA or commonsense reasoning performance, providing a practical, checkpoint-compatible route to more syntax-robust decoder-only LLMs.
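The core mechanism can be sketched as a small side branch: token states cross-attend to a precomputed chunk memory, the result is scaled by a learned gate, and a token update mask restricts which positions are modified. The sketch below is a minimal illustration under assumptions not stated in the source (class and parameter names such as `GatedTreeCrossAttention`, `chunk_memory`, and `update_mask` are hypothetical, as is the zero-initialized tanh gate used here to make the branch a no-op at initialization, one common way to realize checkpoint compatibility):

```python
import torch
import torch.nn as nn


class GatedTreeCrossAttention(nn.Module):
    """Hypothetical sketch of a GTCA-style branch (not the authors' code).

    Token hidden states cross-attend to precomputed constituency-chunk
    memory; a scalar gate initialized to zero keeps the branch silent at
    the start of training, so the pretrained checkpoint's behavior is
    preserved until the gate is learned.
    """

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # tanh(0) = 0, so the branch contributes nothing at initialization.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(
        self,
        hidden: torch.Tensor,        # (B, T, D) backbone token states
        chunk_memory: torch.Tensor,  # (B, C, D) precomputed chunk vectors
        update_mask: torch.Tensor,   # (B, T) bool: which tokens get updated
    ) -> torch.Tensor:
        # Queries are token states; keys/values are the chunk memory.
        ctx, _ = self.attn(hidden, chunk_memory, chunk_memory)
        ctx = torch.tanh(self.gate) * ctx
        # The update mask limits structural updates to selected tokens.
        return hidden + update_mask.unsqueeze(-1).to(ctx.dtype) * ctx
```

With the gate at zero, the module returns its input unchanged, which is the sense in which such a branch can be bolted onto a frozen checkpoint without disturbing it; staged training would then unfreeze the gate and branch parameters before (optionally) the backbone.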
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 64.85 | 1460 |
| Commonsense Reasoning | WinoGrande | Accuracy | 77.89 | 231 |
| Multiple-Choice QA | MMLU | Accuracy | 71.02 | 148 |
| Syntax | BLiMP | Accuracy | 84.61 | 8 |
| Multiple-Choice QA | CLOTH | Accuracy | 83.98 | 8 |
| Syntax | CoLA | MCC | 56.69 | 8 |