
AHP-Powered LLM Reasoning for Multi-Criteria Evaluation of Open-Ended Responses

About

Question answering (QA) tasks have been extensively studied in the field of natural language processing (NLP). Unlike closed-ended questions with definitive answers, answers to open-ended questions are highly diverse and difficult to quantify, and cannot simply be evaluated as correct or incorrect. While large language models (LLMs) have demonstrated strong capabilities across various tasks, they exhibit relatively weaker performance in evaluating answers to open-ended questions. In this study, we propose a method that leverages LLMs and the analytic hierarchy process (AHP) to assess answers to open-ended questions. We utilized LLMs to generate multiple evaluation criteria for a question. Subsequently, answers were subjected to pairwise comparisons under each criterion with LLMs, and scores for each answer were calculated in the AHP. We conducted experiments on four datasets using both GPT-3.5-turbo and GPT-4. Our results indicate that our approach aligns more closely with human judgment than the four baselines. Additionally, we explored the impact of the number of criteria, variations in models, and differences in datasets on the results.
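The abstract describes the standard AHP machinery: pairwise comparison matrices per criterion, priority weights derived from them, and a consistency check. As a minimal sketch of that scoring step (not the paper's implementation; the example matrix and judgments are hypothetical stand-ins for LLM-produced comparisons), the geometric-mean method and Saaty's consistency ratio look like this:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Derive AHP priority weights from a pairwise comparison matrix
    using the geometric-mean (logarithmic least squares) method."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
    return gm / gm.sum()                        # normalize to sum to 1

def consistency_ratio(pairwise):
    """Saaty consistency ratio CR = CI / RI, with CI = (lambda_max - n) / (n - 1).
    CR < 0.1 is the conventional threshold for acceptable consistency."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    w = ahp_priorities(A)
    lam_max = (A @ w / w).mean()                # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty random indices
    return ci / ri

# Hypothetical example: three answers compared under one criterion.
# A[i][j] > 1 means answer i is judged better than answer j.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_priorities(A)   # per-answer scores under this criterion
```

In the full pipeline, one such weight vector would be computed per criterion and then aggregated (e.g. a weighted sum over criteria) into a final score per answer.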

Xiaotian Lu, Jiyi Li, Koh Takeuchi, Hisashi Kashima • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Consistency Evaluation | DecisionBench | CR Mean: 0.1693 | 9 |
| Multi-Criteria Decision Analysis | IMDb M-Act | NDCG@5: 81.7 | 3 |
| Multi-Criteria Decision Analysis | IMDb M-Dra | NDCG@5: 0.83 | 3 |
| Multi-Criteria Decision Analysis | HotelRec H-Fam | NDCG@5: 95 | 3 |
| Multi-Criteria Decision Analysis | Beer-Advocate B-Ref | NDCG@5: 94.8 | 3 |
| Multi-Criteria Decision Analysis | HotelRec H-Bus | NDCG@5: 96.8 | 3 |
| Multi-Criteria Decision Analysis | Beer-Advocate B-Cplx | NDCG@5: 94.8 | 3 |
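Most of the benchmark rows report NDCG@5, which measures how well the top five positions of a produced ranking agree with the ideal ordering by relevance. A minimal self-contained sketch of the metric (the relevance lists below are illustrative, not from the benchmarks):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the first k positions:
    sum of rel_i / log2(i + 2) for rank positions i = 0..k-1."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(ranked_rels, k=5):
    """NDCG@k: DCG of the produced ranking divided by the DCG of
    the ideal (relevance-sorted) ranking. Returns a value in [0, 1]."""
    ideal = sorted(ranked_rels, reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_rels, k) / idcg if idcg > 0 else 0.0

# Illustrative usage: relevance grades listed in the order the system ranked them.
perfect = ndcg_at_k([3, 2, 1, 0, 0])   # already in ideal order -> 1.0
flawed = ndcg_at_k([0, 1, 2, 3, 0])    # best item ranked too low -> below 1.0
```

Note that scores are typically reported either as fractions in [0, 1] or as percentages; the table above appears to mix both conventions (0.83 alongside values like 94.8).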
