A self-organizing incremental neural network for continual supervised learning
Abstract
Continual learning algorithms can adapt to changes in data distribution, new classes, and even completely new tasks without catastrophically forgetting previously acquired knowledge. Here, we present a novel self-organizing incremental neural network, GSOINN+, for continual supervised learning. GSOINN+ learns a topological mapping of the input data to an undirected network and uses a weighted nearest-neighbor rule with fractional distance for classification. GSOINN+ learns incrementally: new classification tasks do not need to be specified a priori, and no rehearsal of previously learned tasks with stored training sets is required. In a series of sequential learning experiments, we show that GSOINN+ can mitigate catastrophic forgetting, even when completely new tasks are to be learned.
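The classification rule described above can be illustrated with a minimal sketch. This is not the paper's implementation; the exponent `p`, the inverse-distance weighting, and `k` are illustrative assumptions, shown only to make "weighted nearest-neighbor rule with fractional distance" concrete:

```python
import numpy as np

def fractional_distance(a, b, p=0.5):
    # Minkowski-style distance with a fractional exponent 0 < p < 1;
    # the specific value p=0.5 is an assumption, not the paper's choice
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))

def weighted_nn_predict(nodes, labels, x, k=3, p=0.5):
    # Classify x from its k nearest network nodes, each node voting
    # with weight 1/distance (hypothetical weighting scheme)
    dists = np.array([fractional_distance(x, n, p) for n in nodes])
    nearest = np.argsort(dists)[:k]
    votes = {}
    for i in nearest:
        w = 1.0 / (dists[i] + 1e-12)  # guard against zero distance
        votes[labels[i]] = votes.get(labels[i], 0.0) + w
    return max(votes, key=votes.get)
```

With nodes of class 0 clustered near the origin and class 1 nodes far away, a query close to the origin is assigned class 0 because the inverse-distance weights of the nearby nodes dominate the vote.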
Keywords: Catastrophic forgetting, Concept drift, Continual learning, Incremental learning, Supervised learning
Article history: Received 23 May 2020, Revised 1 April 2021, Accepted 23 July 2021, Available online 30 July 2021, Version of Record 4 August 2021.
DOI: https://doi.org/10.1016/j.eswa.2021.115662