Sample Complexity of Model-Based Search

Abstract

We consider the problem of searching a domain for points that have a desired property, in the special case where the objective function that determines the properties of points is unknown and must be learned during search. We give a parallel to PAC learning theory that is appropriate for reasoning about the sample complexity of this problem. The learner queries the true objective function at selected points, and uses this information to choose models of the objective function from a given hypothesis class that is known to contain a correct model. These models are used to focus the search on more promising areas of the domain. The goal is to find a point with the desired property in a small number of queries. We define an analog to VC dimension, needle dimension, to be the size of the largest sample in which any single point could have the desired property without the other points' values revealing this information. We give an upper bound on sample complexity that is linear in needle dimension for a natural type of search protocol and a linear lower bound for a class of constrained problems. We also describe the relationship between needle dimension and VC dimension, explore connections between model-based search and active concept learning (including several novel positive results in active learning), and consider a scale-sensitive version of needle dimension. Several simple examples illustrate the dependence of needle dimension on features of search problems.
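The abstract's definition of needle dimension can be made concrete with a small brute-force sketch. The interpretation below is one possible reading of the definition, not the paper's formalization: a sample is "ambiguous" if, for every point in it, two hypotheses agree on the remaining points yet disagree on whether that point has the desired (target) value, so the other points' values cannot reveal whether it is the "needle". All names and the exhaustive-search approach here are illustrative assumptions.

```python
from itertools import combinations

def needle_dimension(domain, hypotheses, target=1):
    """Toy needle dimension under one reading of the abstract's definition:
    the size of the largest sample S such that for EVERY x in S two
    hypotheses agree on S \\ {x} while disagreeing on whether x takes
    the target value. Hypotheses are dicts mapping points to values."""
    def ambiguous(sample):
        for x in sample:
            rest = [y for y in sample if y != x]
            # Look for a pair (h1, h2) that hides x's value:
            # h1 says x is the needle, h2 says it is not, and the two
            # are indistinguishable on the other sampled points.
            found = False
            for h1 in hypotheses:
                if h1[x] != target:
                    continue
                for h2 in hypotheses:
                    if h2[x] != target and all(h1[y] == h2[y] for y in rest):
                        found = True
                        break
                if found:
                    break
            if not found:
                return False
        return True

    best = 0
    for k in range(1, len(domain) + 1):
        if any(ambiguous(s) for s in combinations(domain, k)):
            best = k
    return best
```

For example, with the three singleton hypotheses over a three-point domain (each says exactly one point is the needle), the full sample is not ambiguous, because ruling out two points pins down the third; adding the all-zero hypothesis restores ambiguity on the whole domain. This mirrors the abstract's point that the dimension depends on what the other points' values can reveal.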

Review history: Received 22 September 1998, Revised 22 March 1999, Available online 25 May 2002.

Paper URL: https://doi.org/10.1006/jcss.1999.1644