iTAG: Inverse Design for Natural Text Generation with Accurate Causal Graph Annotations
About
A fundamental obstacle to causal discovery from text is the lack of causally annotated text data for use as ground truth, owing to high annotation costs. This motivates the task of generating text with causal graph annotations. Early template-based generation methods sacrifice text naturalness in exchange for high annotation accuracy. Recent Large Language Model (LLM)-dependent methods generate natural text directly from target graphs, but do not guarantee annotation accuracy. Therefore, we propose iTAG, which assigns real-world concepts to graph nodes before the graph-to-text conversion performed by existing LLM-dependent methods. iTAG frames this process as an inverse problem with the causal graph as the target, iteratively examining and refining the concept selection through Chain-of-Thought (CoT) reasoning so that the relations induced between the chosen concepts are as consistent as possible with the causal relationships described by the target graph. In extensive experiments, iTAG demonstrates both very high annotation accuracy and high text naturalness, and the scores of text-based causal discovery algorithms on the generated data correlate strongly with their scores on real-world data. This suggests that iTAG-generated data can serve as a practical surrogate for scalable benchmarking of text-based causal discovery algorithms.
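The inverse-problem loop described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: concept selection here is a brute-force search over a toy knowledge base, whereas iTAG refines the selection through CoT reasoning with an LLM. All names, the knowledge base, and the scoring are invented for illustration.

```python
# Hypothetical sketch of iTAG-style concept assignment (illustrative only).
# Nodes of a target causal graph are assigned real-world concepts, the
# relations induced by those concepts are compared against the target edges,
# and the assignment is revised until the two agree.
from itertools import permutations

def induced_edges(assignment, knowledge):
    """Edges implied by the chosen concepts, per a toy knowledge base
    of known (cause, effect) concept pairs."""
    inverse = {concept: node for node, concept in assignment.items()}
    return {(inverse[a], inverse[b])
            for (a, b) in knowledge
            if a in inverse and b in inverse}

def assign_concepts(target_edges, nodes, candidates, knowledge, max_iters=100):
    """Search for a node-to-concept assignment whose induced edges match
    the target graph; a real system would refine via CoT prompting of an
    LLM rather than enumerate permutations."""
    best, best_score = None, -1.0
    for i, concepts in enumerate(permutations(candidates, len(nodes))):
        if i >= max_iters:
            break
        assignment = dict(zip(nodes, concepts))
        induced = induced_edges(assignment, knowledge)
        # F1-style agreement between induced and target edge sets
        tp = len(induced & target_edges)
        score = 2 * tp / (len(induced) + len(target_edges) or 1)
        if score > best_score:
            best, best_score = assignment, score
        if score == 1.0:  # fully consistent with the target graph
            break
    return best, best_score

# Toy run: target graph X -> Y, invented cause-effect knowledge base.
knowledge = {("smoking", "lung cancer"), ("rain", "wet ground")}
assignment, score = assign_concepts(
    {("X", "Y")}, ["X", "Y"],
    ["rain", "wet ground", "smoking"], knowledge)
print(assignment, score)  # a consistent assignment and its agreement score
```

The key design point mirrored here is that the causal graph is the fixed target and the concept assignment is the free variable being optimized, which is what makes the annotation accurate by construction.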
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Causal Discovery | iTAG-generated corpora | F1G | 90.8 | 72 |
| Causal Discovery | Real-world corpora | F1G | 81.5 | 72 |
| Annotation Accuracy | Experiment 1 (test) | F1 Score (Ga) | 96 | 40 |
| Annotation Accuracy | DeepSeek-R1, Experiment 1 | F1 Score (Ga) | 97 | 40 |
| Causal Graph Annotation Accuracy | Synthetic causal graphs, Phase 1 generator | F1 Score (Graph) | 98 | 40 |
| AI-Generated Text Detection | Qwen3-235B-A22B-Thinking-2507 generated text (business, medical, legal; test) | BERT Score | 55 | 5 |
| Text Detectability | GPT-5-pro, Experiment 2, averaged over n ∈ {3, ..., 10} (2025-10-06) | BERT Score | 54 | 5 |
| Text Detectability | claude-opus, balanced 4-1 (test) | fastText F1_D | 52 | 5 |
| Text Naturalness Evaluation | DeepSeek-R1, Experiment 2 | BERT Score | 0.56 | 5 |
| Agreement Analysis of Causal Discovery Scores | iTAG-generated vs. real-world corpora (24 algorithm-size pairs, n = 3–10) | Pearson Correlation Coefficient | 0.928 | 4 |
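The agreement analysis in the last row pairs each algorithm-size setting's causal discovery score on iTAG-generated corpora with its score on real-world corpora and reports the Pearson correlation between the two. A minimal sketch of that computation, using invented placeholder scores rather than the paper's actual per-algorithm results:

```python
# Pearson correlation between causal discovery scores obtained on
# generated vs. real-world corpora. The paired scores are invented
# placeholders, one pair per hypothetical algorithm-size setting.
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of paired lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

scores_generated = [0.91, 0.85, 0.78, 0.66]  # placeholder F1 on iTAG data
scores_real      = [0.82, 0.76, 0.71, 0.60]  # placeholder F1 on real data
print(round(pearson(scores_generated, scores_real), 3))
```

A correlation near 1 means algorithms that do well on real corpora also do well on the generated ones, which is the property that justifies using the generated data as a benchmarking surrogate.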