
Improving Large Language Models in Event Relation Logical Prediction

About

Event relations are crucial for narrative understanding and reasoning. Governed by nuanced logic, event relation extraction (ERE) is a challenging task that demands thorough semantic understanding and rigorous logical reasoning. In this paper, we conduct an in-depth investigation to systematically explore the capability of LLMs in understanding and applying event relation logic. More specifically, we first investigate the deficiencies of LLMs in logical reasoning across different tasks. Our study reveals that LLMs are not logically consistent reasoners, which results in their suboptimal performance on tasks that require rigorous reasoning. To address this, we explore three different approaches to endow LLMs with event relation logic, enabling them to generate more coherent answers across various scenarios. Based on our approach, we also contribute a synthesized dataset (LLM-ERL) involving high-order reasoning for evaluation and fine-tuning. Extensive quantitative and qualitative analyses on different tasks also validate the effectiveness of our approaches and provide insights for solving practical tasks with LLMs in future work. Code is available at https://github.com/chenmeiqii/Teach-LLM-LR.
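The abstract's premise, that event relation predictions must obey logical constraints (and that LLMs often violate them), can be illustrated with a minimal consistency check. The relation names and the single entailment rule below are hypothetical examples for illustration, not the paper's actual constraint set:

```python
# Minimal sketch of a logical-consistency check over event relation
# predictions. Relations are (head_event, relation, tail_event) triples.
# The rule "CAUSES entails BEFORE" is an illustrative assumption.

predictions = [
    ("e1", "CAUSES", "e2"),
    ("e2", "BEFORE", "e1"),   # conflicts: a cause must precede its effect
    ("e3", "BEFORE", "e4"),
]

def implied(triples):
    """Expand predictions with relations entailed by the example rule."""
    facts = set(triples)
    for head, rel, tail in triples:
        if rel == "CAUSES":
            # Rule: CAUSES(a, b) entails BEFORE(a, b).
            facts.add((head, "BEFORE", tail))
    return facts

def inconsistencies(triples):
    """Report event pairs asserted BEFORE in both directions (a 2-cycle)."""
    facts = implied(triples)
    return sorted(
        (head, tail)
        for head, rel, tail in facts
        if rel == "BEFORE" and (tail, "BEFORE", head) in facts
    )

print(inconsistencies(predictions))  # → [('e1', 'e2'), ('e2', 'e1')]
```

A checker of this shape flags incoherent prediction sets; the paper's approaches aim to make the model's raw outputs pass such constraints without post-hoc repair.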

Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, Dongsheng Li • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Logical reasoning | FOLIO | Accuracy | 48 | 119 |
| Event relation extraction | MAVEN-ERE 1.0 (test) | Micro F1 | 26.4 | 44 |
| Event relation extraction | Causal-TimeBank 1.0 (test) | Micro F1 | 13.3 | 43 |
| Logical reasoning | ProofWriter | Accuracy | 44 | 24 |

Other info

Code
