A Note on Batch and Incremental Learnability

Abstract:

According to Gold's criterion of identification in the limit, a learner, presented with data about a concept, is allowed to make a finite number of incorrect hypotheses before converging to a correct hypothesis. If, on the other hand, the learner is allowed to make only one conjecture, which has to be correct, the resulting criterion of success is known as finite identification. Identification in the limit may be viewed as an idealized model for incremental learning, whereas finite identification may be viewed as an idealized model for batch learning. The present paper establishes the surprising fact that the collections of recursively enumerable languages that can be finitely identified (batch learned in the ideal case) from both positive and negative data can also be identified in the limit (incrementally learned in the ideal case) from only positive data. It is often difficult to extract insights about practical learning systems from abstract theorems in inductive inference. However, this result may be seen as carrying a moral for the design of learning systems, as it yields, in the ideal case of no inaccuracies, an algorithm for converting batch systems that learn from both positive and negative data into incremental systems that learn from only positive data without any loss in learning power. This is achieved by the incremental system simulating the batch system in an incremental fashion and using the heuristic of a "localized closed-world assumption" to generate negative data.
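
The conversion described in the last two sentences lends itself to a direct simulation. Below is a minimal Python sketch of that idea, not the paper's formal construction: the names (incremental_from_batch, batch_learn) and the choice of natural numbers as the universe are illustrative assumptions. At each stage, every element up to the stage number that has not yet appeared as positive data is tentatively labeled negative, and the batch learner is rerun on the labeled sample so obtained.

```python
from typing import Callable, Hashable, Iterable, Iterator, Optional

Hypothesis = Hashable  # stand-in for a grammar/index of an r.e. language


def incremental_from_batch(
    batch_learn: Callable[[list[tuple[int, bool]]], Optional[Hypothesis]],
    positive_stream: Iterable[int],
) -> Iterator[Optional[Hypothesis]]:
    """Run a batch (finite-identification) learner in incremental fashion.

    At stage n, every element <= n not yet seen as positive data is
    tentatively labeled negative (the localized closed-world assumption),
    and the batch learner is rerun on the labeled sample so obtained.
    """
    seen_pos: set[int] = set()
    conjecture: Optional[Hypothesis] = None
    for n, x in enumerate(positive_stream):
        seen_pos.add(x)
        sample = [(e, True) for e in sorted(seen_pos)]
        sample += [(e, False) for e in range(n + 1) if e not in seen_pos]
        guess = batch_learn(sample)
        if guess is not None:
            conjecture = guess  # keep the latest batch conjecture
        yield conjecture  # the sequence of conjectures converges in the limit


# Toy batch learner for the class {{0, ..., k} : k >= 0}: it commits only
# when the sample pins the language down (k positive, k + 1 negative).
def batch_learn(sample: list[tuple[int, bool]]) -> Optional[int]:
    pos = {e for e, label in sample if label}
    neg = {e for e, label in sample if not label}
    if pos and max(pos) + 1 in neg:
        return max(pos)
    return None


print(list(incremental_from_batch(batch_learn, [2, 0, 1, 2, 0, 1])))
# -> [None, None, None, 2, 2, 2]
```

Note how the closed-world guesses can mislabel elements early on (at stage 1 above, 1 is guessed negative even though it belongs to the target language {0, 1, 2}); this is why the simulation yields identification in the limit rather than finite identification.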

Keywords: inductive inference, computational learning theory, identification in the limit, finite identification, incremental learning, batch learning

Article history: Received 5 September 1996, Revised 19 December 1997, Available online 25 May 2002.

DOI: https://doi.org/10.1006/jcss.1998.1568