
ADO: Automatic Data Optimization for Inputs in LLM Prompts

About

This study explores a novel approach to enhancing the performance of Large Language Models (LLMs) by optimizing the input data within prompts. While previous research has primarily focused on refining instruction components and augmenting input data with in-context examples, our work investigates the potential benefits of optimizing the input data itself. We introduce a two-pronged strategy for input data optimization: content engineering and structural reformulation. Content engineering involves imputing missing values, removing irrelevant attributes, and enriching profiles by generating additional information inferred from existing attributes. Following content engineering, structural reformulation optimizes how the modified content is presented to LLMs, given their sensitivity to input format. Our findings suggest that these optimizations can significantly improve the performance of LLMs across a variety of tasks, offering a promising avenue for future research in prompt engineering. The source code is available at https://anonymous.4open.science/r/ADO-6BC5/
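To make the two-pronged strategy concrete, the sketch below applies it to a small user-profile dictionary. This is a minimal illustration, not the paper's implementation: the function names, the profile schema, and the specific imputation/enrichment rules are all hypothetical assumptions.

```python
# Illustrative sketch of ADO's two stages on a toy user profile.
# All names and rules here are hypothetical, not the paper's actual code.

def engineer_content(profile: dict) -> dict:
    """Content engineering: impute missing values, remove irrelevant
    attributes, and enrich the profile with inferred information."""
    # Remove an attribute assumed irrelevant to the downstream task.
    cleaned = {k: v for k, v in profile.items() if k != "internal_id"}
    # Impute a missing value with a neutral default.
    cleaned.setdefault("occupation", "unknown")
    # Enrich: infer a coarse age group from the existing age attribute.
    if "age" in cleaned:
        cleaned["age_group"] = "senior" if cleaned["age"] >= 60 else "adult"
    return cleaned

def reformulate_structure(profile: dict) -> str:
    """Structural reformulation: render the engineered content in a
    format (here, sorted key: value lines) chosen for the target LLM,
    since LLMs are sensitive to how the same content is laid out."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(profile.items()))

raw = {"age": 64, "internal_id": "u123", "purchases": ["novel", "lamp"]}
prompt_input = reformulate_structure(engineer_content(raw))
print(prompt_input)
```

In practice the format produced by `reformulate_structure` (key-value lines, JSON, markdown, etc.) would itself be searched over or optimized per task, rather than fixed as above.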

Sam Lin, Wenyue Hua, Lingyao Li, Zhenting Wang, Yongfeng Zhang• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | gsm | Accuracy | 75.5 | 35 |
| Fraud Detection | FD | Balanced Acc | 69.9 | 12 |
| Job Classification | Job | Accuracy | 64.3 | 12 |
| Question Answering | QA | Accuracy | 59.5 | 12 |
| Recommendation | Amazon Books | Hit Rate @ 10 | 0.198 | 12 |
| Recommendation | AT (Amazon Toys) | Hit@10 | 21.3 | 12 |
| Recommendation | AE (Amazon Electronics) | Hit@10 | 25.3 | 12 |
| Recommendation | CI (Amazon Clothing) | Hit@10 | 85.3 | 12 |
| Recommendation | HD (Amazon Home) | Hit@10 | 0.722 | 12 |
