Incremental learning with partial instance memory

Authors:

Abstract

Agents that learn on-line with partial instance memory retain some of the previously encountered examples for use in future training episodes. In earlier work, we selected extreme examples—those from the boundaries of induced concept descriptions—combined them with incoming instances, and used a batch learning algorithm to generate new concept descriptions. In this paper, we extend this work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, on two real-world applications—computer intrusion detection and blasting cap detection in X-ray images—we conducted a lesion study to analyze the trade-offs among predictive accuracy, examples held in memory, learning time, and concept complexity. Empirical results showed that although the use of our partial-memory model decreased predictive accuracy compared to systems that learn from all available training data, it also decreased memory requirements, decreased learning time, and in some cases, decreased concept complexity. We also present results from an experiment using the STAGGER concepts, a synthetic data set involving concept drift, suggesting that our methods perform comparably to the FLORA2 system in terms of predictive accuracy but store fewer examples. Moreover, these outcomes are consistent with earlier results using our partial-memory model and batch learning.
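The partial-memory scheme described above can be sketched in a few lines: each training episode merges the retained examples with the incoming batch, induces a new concept description, and then prunes memory back to the "extreme" examples near the induced boundary. The sketch below is an illustrative simplification, not the paper's AQ11-PM or GEM-PM: the classifier is a hypothetical one-dimensional threshold learner, and "extreme" is approximated as "closest to the learned threshold"; all function names are invented for illustration.

```python
def train_threshold(examples):
    """Toy concept learner: label 1 iff x >= threshold.
    Places the threshold midway between the largest negative
    and the smallest positive example."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    if not pos or not neg:
        # Degenerate case: only one class seen so far.
        return min(pos) if pos else float("inf")
    return (max(neg) + min(pos)) / 2.0

def select_extreme(examples, threshold, k):
    """Partial-memory selection: keep the k examples nearest the
    decision boundary (a stand-in for boundary examples of induced
    concept descriptions)."""
    return sorted(examples, key=lambda ex: abs(ex[0] - threshold))[:k]

def partial_memory_learner(episodes, k=4):
    """On-line learning loop with partial instance memory."""
    memory = []
    threshold = None
    for batch in episodes:           # each episode brings new labelled data
        data = memory + batch        # combine retained extremes with new data
        threshold = train_threshold(data)
        memory = select_extreme(data, threshold, k)  # prune to partial memory
    return threshold, memory
```

For example, feeding two episodes `[[(0.1, 0), (0.9, 1)], [(0.4, 0), (0.6, 1)]]` with `k=2` ends with the two boundary examples `(0.4, 0)` and `(0.6, 1)` in memory, while the easy, far-from-boundary examples are forgotten—the memory-saving behavior the abstract reports.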

Keywords: On-line concept learning, Incremental learning, Partial instance memory, Concept drift

Article history: Received 18 April 2002, Revised 14 April 2003, Available online 23 September 2003.

DOI: https://doi.org/10.1016/j.artint.2003.04.001