
Measuring Bargaining Abilities of LLMs: A Benchmark and A Buyer-Enhancement Method

About

Bargaining is an important and distinctive part of negotiation between humans. As LLM-driven agents learn to negotiate and act like real humans, how to evaluate agents' bargaining abilities remains an open problem. For the first time, we formally describe the bargaining task as an asymmetric incomplete-information game, defining the gains of the Buyer and Seller across multiple bargaining processes. This allows us to quantitatively assess an agent's performance on the bargaining task. We collected a real product price dataset, AmazonHistoryPrice, and evaluated the bargaining abilities of various LLM agents. We find that playing the Buyer is much harder than playing the Seller, and that increasing model size cannot effectively improve the Buyer's performance. To address this challenge, we propose a novel approach called OG-Narrator that integrates a deterministic Offer Generator to control the price range of the Buyer's offers with an LLM Narrator that turns the generated offers into natural language sentences. Experimental results show that OG-Narrator improves the Buyer's deal rate from 26.67% to 88.88% and brings a tenfold increase in profits across all baselines, even for a model that has not been aligned.
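The OG-Narrator idea can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's implementation: the geometric concession schedule, the function names, and the template-based narrator stand-in (in place of a real LLM call) are all assumptions.

```python
def generate_offers(list_price: float, start_ratio: float = 0.5,
                    growth: float = 1.1, max_rounds: int = 5) -> list[float]:
    """Deterministic Offer Generator (assumed schedule): start below the
    list price and concede geometrically, never exceeding the list price."""
    offers = []
    offer = list_price * start_ratio
    for _ in range(max_rounds):
        offers.append(round(min(offer, list_price), 2))
        offer *= growth
    return offers

def narrate(offer: float) -> str:
    """Stand-in for the LLM Narrator: wrap a generated offer
    in a natural language sentence."""
    return f"I can offer ${offer:.2f} for this item. Would you accept?"

# The Buyer's price trajectory is fixed by the generator; the narrator
# only controls phrasing, so the offer range stays under control.
offers = generate_offers(100.0)
messages = [narrate(o) for o in offers]
```

The point of the split is that the deterministic generator bounds what the Buyer can offer, while the language model is confined to surface realization, which is where LLM Buyers are reliable.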

Tian Xia, Zhiwei He, Tong Ren, Yibo Miao, Zhuosheng Zhang, Yang Yang, Rui Wang• 2024

Related benchmarks

Task        Dataset                               Result  Rank
Bargaining  AmazonHistoryPrice ALL 1.0 (test)     --      28
Bargaining  AmazonHistoryPrice CI 1.0 (test)      --      28
Bargaining  AmazonHistoryPrice MI 1.0 (test)      --      14
Bargaining  Bargaining Benchmark 20 Products ALL  --      4
Bargaining  Bargaining Benchmark MI 20 Products   --      4
Bargaining  Bargaining Benchmark 20 Products CI   --      4
