Making sense of raw input

Authors:

Abstract

How should a machine intelligence perform unsupervised structure discovery over streams of sensory input? One approach is to cast this problem as an apperception task [1]. Here, the task is to construct an explicit, interpretable theory that both explains the sensory sequence and satisfies a set of unity conditions, designed to ensure that the constituents of the theory are connected in a relational structure.

Keywords: Interpretable AI, Unsupervised theory learning, Neuro-symbolic integration

Article history: Received 17 December 2019, Revised 14 December 2020, Accepted 28 April 2021, Available online 3 May 2021, Version of Record 6 May 2021.

DOI: https://doi.org/10.1016/j.artint.2021.103521