
iTAG: Inverse Design for Natural Text Generation with Accurate Causal Graph Annotations

About

A fundamental obstacle to causal discovery from text is the lack of causally annotated text data to serve as ground truth, owing to high annotation costs. This motivates the task of generating text with causal graph annotations. Early template-based generation methods sacrifice text naturalness in exchange for high annotation accuracy. Recent methods that depend on Large Language Models (LLMs) generate natural text directly from target graphs, but do not guarantee the accuracy of the causal graph annotations. We therefore propose iTAG, which assigns real-world concepts to graph nodes before the graph-to-text conversion used by existing LLM-dependent methods. iTAG frames this assignment as an inverse problem with the causal graph as the target: it iteratively examines and refines the concept selection through Chain-of-Thought (CoT) reasoning, so that the relations induced among the chosen concepts match the target causal relationships described by the graph as closely as possible. Across extensive tests, iTAG achieves both high annotation accuracy and high naturalness, and causal discovery algorithms evaluated on the generated data show strong statistical correlation between their results on generated and real-world data. This suggests that iTAG-generated data can serve as a practical surrogate for scalable benchmarking of text-based causal discovery algorithms.
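The inverse-design loop described above can be sketched as a search over concept assignments that is accepted only when the induced relations match the target graph. This is a minimal illustration, not the paper's method: `KNOWN_RELATIONS` stands in for the world knowledge the LLM's CoT reasoning would surface, and `assign_concepts` replaces the iterative LLM-driven refinement with an exhaustive search over a tiny candidate pool.

```python
from itertools import permutations

# Target causal graph over abstract nodes: X -> Y -> Z
TARGET_EDGES = {("X", "Y"), ("Y", "Z")}

# Toy "world knowledge": causal relations among real-world concepts
# (in iTAG these would be elicited from an LLM, not hard-coded).
KNOWN_RELATIONS = {("smoking", "tar"), ("tar", "cancer")}

CANDIDATES = ["smoking", "tar", "cancer"]

def induced_relations(assignment):
    """Known relations among assigned concepts, mapped back to node labels."""
    inv = {concept: node for node, concept in assignment.items()}
    return {(inv[a], inv[b]) for a, b in KNOWN_RELATIONS
            if a in inv and b in inv}

def assign_concepts(nodes):
    """Try assignments until the induced graph equals the target graph."""
    for perm in permutations(CANDIDATES, len(nodes)):
        assignment = dict(zip(nodes, perm))
        if induced_relations(assignment) == TARGET_EDGES:
            return assignment  # consistent with the target causal graph
    return None  # no consistent assignment found

print(assign_concepts(["X", "Y", "Z"]))
# -> {'X': 'smoking', 'Y': 'tar', 'Z': 'cancer'}
```

Once a consistent assignment is found, the annotated graph over real-world concepts can be handed to an LLM-based graph-to-text step, with the annotation accuracy already fixed by construction.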

Wenshuo Wang, Boyu Cao, Nan Zhuang, Wei Li • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Causal Discovery | iTAG-generated corpora | F1G: 90.8 | 72 |
| Causal Discovery | Real-world corpora | F1G: 81.5 | 72 |
| Annotation Accuracy | Experiment 1 (test) | F1 Score (Ga): 96 | 40 |
| Annotation Accuracy | DeepSeek-R1, Experiment 1 | F1 Score (Ga): 97 | 40 |
| Causal graph annotation accuracy | Synthetic causal graphs, Phase 1 generator | F1 Score (Graph): 98 | 40 |
| AI-generated text detection | Qwen3-235B-A22B-Thinking-2507 generated text, business/medical/legal (test) | BERT Score: 55 | 5 |
| Text Detectability | GPT-5-pro, Experiment 2, averaged over n ∈ {3, ..., 10}, 2025-10-06 | BERT Score: 54 | 5 |
| Text Detectability | claude-opus, balanced 4-1 (test) | fastText F1_D: 52 | 5 |
| Text Naturalness Evaluation | DeepSeek-R1, Experiment 2 | BERT Score: 0.56 | 5 |
| Agreement analysis of causal discovery scores | iTAG-generated vs. real-world corpora, 24 algorithm-size pairs, n = 3–10 | Pearson Correlation Coefficient: 0.928 | 4 |
Showing 10 of 13 rows
