
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles

About

Assessing the effectiveness of large language models (LLMs) across different tasks is crucial for understanding their strengths and weaknesses. This paper presents the Hierarchical Prompting Taxonomy (HPT), grounded in human cognitive principles and designed to assess LLMs by examining the cognitive demands that tasks place on them. The HPT employs the Hierarchical Prompting Framework (HPF), which arranges five distinct prompting strategies in a hierarchy ordered by the cognitive demand each places on an LLM relative to human mental capabilities. Task complexity is measured with the Hierarchical Prompting Index (HPI), which captures the cognitive competencies of LLMs across diverse datasets and reveals the cognitive demands each dataset places on different LLMs. This approach enables a comprehensive evaluation of both an LLM's problem-solving abilities and a dataset's intricacy, offering a standardized metric for task complexity. Extensive experiments with multiple datasets and LLMs show that the HPF improves LLM performance by 2% to 63% over baselines, and that GSM8k is the most cognitively complex of the reasoning and coding tasks, with an average HPI of 3.20, confirming the effectiveness of HPT. To support future research and reproducibility in this domain, the implementations of HPT and HPF are available here.
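To make the hierarchical evaluation concrete, here is a minimal sketch of how such a framework could be scored: strategies are tried in order of increasing cognitive demand, each sample is assigned the level of the cheapest strategy that solves it (or a penalty level if none does), and the dataset-level HPI is the mean of those per-sample levels. The strategy names, the penalty value, and the averaging rule below are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative sketch of a hierarchical prompting evaluation loop.
# The five strategy names, PENALTY_LEVEL, and the HPI formula are
# assumptions for illustration; the paper's exact levels and scoring
# rule may differ.

# Prompting strategies ordered from lowest to highest cognitive demand.
STRATEGIES = [
    "role_prompting",        # level 1: least demanding
    "zero_shot_cot",         # level 2
    "three_shot_cot",        # level 3
    "least_to_most",         # level 4
    "generated_knowledge",   # level 5: most demanding
]
PENALTY_LEVEL = 6  # assigned when no strategy yields a correct answer

def hp_score(sample, model, grader):
    """Return the level of the cheapest strategy that solves `sample`."""
    for level, strategy in enumerate(STRATEGIES, start=1):
        answer = model(sample, strategy)
        if grader(answer, sample):
            return level
    return PENALTY_LEVEL

def hp_index(samples, model, grader):
    """Dataset-level index: mean per-sample level (lower = easier)."""
    scores = [hp_score(s, model, grader) for s in samples]
    return sum(scores) / len(scores)
```

Under this scoring rule, a dataset that most models solve with the simplest prompt gets an index near 1, while one that forces the highest-demand strategies (or fails outright) pushes the index toward the penalty level, giving a single comparable complexity number per dataset-model pair.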

Devichand Budagam, Ashutosh Kumar, Mahsa Khoshnoodi, Sankalp KJ, Vinija Jain, Aman Chadha • 2024

Related benchmarks

Task                             | Dataset          | Result           | Rank
---------------------------------|------------------|------------------|-----
Mathematical Reasoning           | GSM8K (test)     | Accuracy: 86.2   | 751
Code Generation                  | HumanEval (test) | Pass@1: 100      | 444
Multitask Language Understanding | MMLU (test)      | Accuracy: 83.31  | 303
Question Answering               | BoolQ            | --               | 240
Commonsense Question Answering   | CSQA (test)      | Accuracy: 0.8476 | 127
Question Answering               | BoolQ (test)     | Accuracy: 91.752 | 46
Commonsense Question Answering   | CSQA             | --               | 44
Machine Translation              | IWSLT            | BLEU: 0.24       | 31
Meeting Summarization            | SamSum           | HPI: 6.4347      | 22
Machine Translation              | IWSLT (test)     | --               | 19

(Showing 10 of 12 rows.)
