Online learning for auction mechanism in bandit setting

Authors:

Highlights:

• We formulate a new bandit problem, the armed bandit problem with shared information (AB-SI).

• We propose two upper-confidence-bound algorithms, UCB-SI1 and UCB-SI2, for this problem.

• We show that, for a finite number of arms, the regret bound of UCB-SI1 is tighter than that of UCB1.

• We show that UCB-SI2 handles a broader class of reward functions in the continuum-armed setting.

Abstract

This paper is concerned with online learning of the optimal auction mechanism for sponsored search in a bandit setting. Previous work takes the click-through rates of ads to be fixed and known to the search engine and uses this information to design the optimal auction mechanism. However, this assumption is not practical, since ads can only receive clicks when they are shown to users. To tackle this problem, we propose to use online learning for auction mechanism design. Specifically, this task corresponds to a new type of bandit problem, which we call the armed bandit problem with shared information (AB-SI). In the AB-SI problem, the arm space (corresponding to the parameter space of the auction mechanism, which can be discrete or continuous) is partitioned into a finite number of clusters (corresponding to the finite number of rankings of the ads), and the arms in the same cluster share the explored information (i.e., the click-through rates of the ads in the same ranked list) whenever any arm from the cluster is pulled. We propose two upper-confidence-bound algorithms, UCB-SI1 and UCB-SI2, to tackle this new problem in the discrete-armed and continuum-armed bandit settings, respectively. We show that when the total number of arms is finite, the regret bound obtained by the UCB-SI1 algorithm is tighter than that of the classical UCB1 algorithm. In the continuum-armed setting, the proposed UCB-SI2 algorithm can handle a larger class of reward functions and achieves a regret bound of O(T^{2/3}(d ln T)^{1/3}), where d is the pseudo-dimension of the real-valued reward function class. Experimental results show that the proposed algorithms significantly outperform several classical online learning methods on synthetic data.
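The shared-information idea can be illustrated with a minimal sketch. The code below is not the paper's UCB-SI1 procedure; it is an illustrative UCB1-style learner (the name ucb_shared_info, the cluster_of mapping, and the pull callback are hypothetical) in which every pull updates statistics shared by the pulled arm's entire cluster, so a single observation tightens the confidence interval of every arm in that cluster. For simplicity it collapses each cluster to one shared reward estimate, whereas the paper shares the click-through-rate estimates of a ranked list and maps them to per-arm revenue.

```python
import math

def ucb_shared_info(n_arms, cluster_of, pull, horizon):
    """UCB1-style bandit where arms in the same cluster share statistics.

    cluster_of: list mapping each arm index to a cluster id in 0..K-1.
    pull:       callback arm -> stochastic reward in [0, 1].
    """
    n_clusters = max(cluster_of) + 1
    counts = [0] * n_clusters    # pulls credited to each cluster
    means = [0.0] * n_clusters   # shared empirical mean reward per cluster

    for t in range(1, horizon + 1):
        if t <= n_clusters:
            # Initialization: play one arm from each cluster once.
            arm = cluster_of.index(t - 1)
        else:
            # UCB1 index computed on shared cluster statistics, so one
            # pull shrinks the confidence bound of every arm in the
            # pulled arm's cluster.
            arm = max(
                range(n_arms),
                key=lambda a: means[cluster_of[a]]
                + math.sqrt(2 * math.log(t) / counts[cluster_of[a]]),
            )

        reward = pull(arm)
        c = cluster_of[arm]
        counts[c] += 1
        means[c] += (reward - means[c]) / counts[c]  # incremental mean

    return means


if __name__ == "__main__":
    import random

    # Toy example: 6 arms in 2 clusters with different mean rewards.
    cluster_of = [0, 0, 0, 1, 1, 1]
    mu = [0.3, 0.7]  # expected Bernoulli reward of each cluster
    pull = lambda a: 1.0 if random.random() < mu[cluster_of[a]] else 0.0
    print(ucb_shared_info(6, cluster_of, pull, horizon=10_000))
```

Under this simplification the confidence bounds behave like UCB1 run over the K clusters rather than over all individual arms, which conveys the intuition behind the tighter regret bound claimed for UCB-SI1.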

Keywords: Armed bandit problem, Mechanism design, Online advertising

Article history: Received 7 November 2012; Revised 11 June 2013; Accepted 12 July 2013; Available online 26 July 2013.

Paper link: https://doi.org/10.1016/j.dss.2013.07.004