The Lob–Pass Problem

Abstract:

We consider a new variant of the online learning model in which the goal of an agent is to choose his or her actions so as to maximize the number of successes, while learning about his or her reacting environment through those very actions. In particular, we consider a model of tennis play, in which the only actions that the player can take are a pass and a lob, and the opponent is modeled by two linear (probabilistic) functions f_L(r) = a_1 r + b_1 and f_P(r) = a_2 r + b_2, specifying the probability that a lob (and a pass, respectively) will win a point when the proportion of lobs played in the past trials is r. We measure the performance of a player in this model by his or her expected regret, namely how many fewer points the player expects to win as compared to the ideal player (one that knows the two probabilistic functions) as a function of t, the total number of trials, which is unknown to the player a priori. Assuming that the probabilistic functions satisfy the "matching shoulders condition," i.e., f_L(0) = f_P(1), we obtain a variety of upper bounds for assumptions and restrictions of varying degrees, ranging from O(log t), O(t^(1/2)), O(t^(3/5)), O(t^(2/3)) to O(t^(5/7)), as well as a matching lower bound of order Ω(log t) for the first case. When the total number of trials t is given to the player in advance, the upper bounds can be improved significantly. An extended abstract describing part of this work has appeared in N. Abe and J. Takeuchi, 1993, in "Proceedings of the Sixth Annual ACM Workshop on Computational Learning Theory," pp. 422–428.
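The model in the abstract can be sketched in a few lines. This is an illustrative simulation, not the paper's algorithm: the coefficients below are hypothetical, chosen only so that the matching shoulders condition f_L(0) = f_P(1) holds and both functions stay in [0, 1].

```python
import random

# Hypothetical coefficients (not from the paper), chosen so that the
# matching shoulders condition fL(0) = fP(1) holds, i.e. b1 = a2 + b2.
a1, b1 = -0.4, 0.7   # lob success probability:  fL(r) = a1*r + b1
a2, b2 = 0.3, 0.4    # pass success probability: fP(r) = a2*r + b2

def f_lob(r):
    return a1 * r + b1

def f_pass(r):
    return a2 * r + b2

# At a steady lob rate r, the expected per-trial success probability is
#   g(r) = r*fL(r) + (1 - r)*fP(r)
#        = (a1 - a2)*r^2 + (b1 + a2 - b2)*r + b2,
# a quadratic that the ideal player maximizes over [0, 1].
def g(r):
    return r * f_lob(r) + (1 - r) * f_pass(r)

A, B = a1 - a2, b1 + a2 - b2
r_star = min(1.0, max(0.0, -B / (2 * A)))  # ideal asymptotic lob rate

def play(lob_rate, t, seed=0):
    """Points won over t trials by a player lobbing at a fixed rate.

    Each shot's success probability depends on r, the proportion of
    lobs among the past trials, exactly as in the model above.
    """
    rng = random.Random(seed)
    lobs = wins = 0
    for i in range(t):
        r = lobs / i if i > 0 else 0.0  # proportion of lobs so far
        lob = rng.random() < lob_rate
        p = f_lob(r) if lob else f_pass(r)
        lobs += lob
        wins += rng.random() < p
    return wins
```

With these coefficients the ideal lob rate is r_star = 3/7 ≈ 0.429. A fixed-rate player's expected regret after t trials is roughly t * (g(r_star) - g(lob_rate)), which grows linearly unless lob_rate = r_star; the paper's player must instead learn f_L and f_P online while keeping the regret sublinear in t.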

Article history: Received 14 June 1999, Revised 23 March 2000, Available online 25 May 2002.

Article page: https://doi.org/10.1006/jcss.2000.1718