LimGen: Probing the LLMs for Generating Suggestive Limitations of Research Papers
About
Examining limitations is a crucial step in the scholarly research reviewing process, revealing aspects where a study may lack decisiveness or require enhancement, and helping readers consider broader implications for further research. In this article, we present the novel and challenging task of Suggestive Limitation Generation (SLG) for research papers. We compile a dataset called ***LimGen***, comprising 4,068 research papers and their associated limitations from the ACL Anthology. We investigate several approaches that harness large language models (LLMs) to produce suggestive limitations, thoroughly examining the related challenges, practical insights, and potential opportunities. Our LimGen dataset and code are available at https://github.com/arbmf/LimGen.
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Review Feedback Generation | RMR-75K (val) | Pairwise Win Rate | 43.8 | 72 |
| Scientific rebuttal generation | Scientific Rebuttal Evaluation dataset (test) | BLEU@4 | 10.9 | 9 |
| Scientific Review Feedback Generation | ICLR Human Evaluation 2025 (test) | Actionability | 3.14 | 9 |
| Scientific Review Feedback Generation | ICLR LLM-as-a-Judge 2025 (test) | Actionability Score | 3.08 | 9 |