Using temporal-difference learning for multi-agent bargaining

Authors:

Highlights:

Abstract:

This research treats a bargaining process as a Markov decision process, in which a bargaining agent's goal is to learn the optimal policy that maximizes the total reward it receives over the process. Reinforcement learning is an effective method for agents to learn how to select actions at each time step of a Markov decision process. Temporal-difference (TD) learning is a fundamental method for solving the reinforcement learning problem, and it can tackle the temporal credit assignment problem. This research designs agents that apply TD-based reinforcement learning to online bilateral bargaining with incomplete information, and evaluates their bargaining performance in terms of average payoff and settlement rate. The results show that agents using TD-based reinforcement learning achieve good bargaining performance. The learning approach is robust and convenient, making it suitable for online automated bargaining in electronic commerce.
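The core mechanism the abstract refers to, TD learning, updates a state-value estimate toward a bootstrapped target after each transition. The sketch below is a minimal, illustrative TD(0) value-estimation loop on a toy chain MDP; the environment, reward structure, and parameter values are assumptions for illustration, not the bargaining environment studied in the paper.

```python
import random

def td0_value_estimation(episodes, alpha=0.1, gamma=0.9, num_states=5, seed=0):
    """Estimate state values with TD(0) on a toy chain MDP.

    The chain MDP here is a stand-in example: the agent starts at state 0,
    moves forward 1 or 2 states at random, and receives a reward of 1.0
    only upon reaching the terminal (rightmost) state.
    """
    rng = random.Random(seed)
    V = [0.0] * num_states          # value estimate for each state
    terminal = num_states - 1
    for _ in range(episodes):
        s = 0
        while s != terminal:
            s_next = min(s + rng.choice([1, 2]), terminal)  # random forward move
            r = 1.0 if s_next == terminal else 0.0          # reward only at the end
            # TD(0) update: nudge V(s) toward the bootstrapped target r + gamma * V(s')
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V
```

Because each update uses the estimate of the successor state as part of its target, credit for the terminal reward propagates backward through the visited states over repeated episodes, which is how TD methods address the temporal credit assignment problem mentioned above.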

Keywords: Markov decision process, Reinforcement learning, Temporal-difference learning, Risk attitude, Online bargaining

Article history: Received 5 September 2004, Revised 18 April 2006, Accepted 19 April 2007, Available online 24 April 2007.

DOI: https://doi.org/10.1016/j.elerap.2007.04.001