Interpretable neural networks based on continuous-valued logic and multicriteria decision operators

Abstract

Combining neural networks with continuous logic and multicriteria decision-making tools can reduce the black-box nature of neural models. In this study, we show that nilpotent logical systems offer an appropriate mathematical framework for the hybridization of continuous nilpotent logic and neural models, helping to improve the interpretability and safety of machine learning. In our concept, perceptrons model soft inequalities, namely membership functions and continuous logical operators. We design the network architecture before training, using continuous logical operators and multicriteria decision tools with given weights working in the hidden layers. Designing the structure appropriately leads to a drastic reduction in the number of parameters to be learned. The theoretical basis offers a straightforward choice of activation functions (the cutting function or its differentiable approximation, the squashing function), and also suggests an explanation for the great success of the rectified linear unit (ReLU). In this study, we focus on the architecture of a hybrid model and introduce the building blocks for future applications in deep neural networks.
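The two activation functions named in the abstract can be illustrated concretely: the cutting function clamps its input to [0, 1] (and equals ReLU(x) − ReLU(x − 1)), while the squashing function is a differentiable approximation of it. A minimal sketch follows; the softplus-based smooth form and the sharpness parameter `beta` are illustrative assumptions, not necessarily the exact parametrization used in the paper:

```python
import math


def cutting(x):
    """Cutting function [x]: clamp the input to [0, 1].

    Equivalent to ReLU(x) - ReLU(x - 1), which hints at the link
    between nilpotent logical operators and ReLU networks.
    """
    return max(0.0, min(1.0, x))


def squashing(x, beta=5.0):
    """A differentiable approximation of the cutting function.

    Illustrative softplus-based form (an assumption, not the paper's
    exact definition):
        S(x) = (softplus(beta*x) - softplus(beta*(x-1))) / beta
    As beta -> infinity, S converges pointwise to the cutting function.
    """
    def softplus(t):
        # Numerically stable log(1 + exp(t)).
        return math.log1p(math.exp(-abs(t))) + max(t, 0.0)

    return (softplus(beta * x) - softplus(beta * (x - 1.0))) / beta
```

With a large `beta`, `squashing` tracks `cutting` closely while remaining smooth everywhere, which is what makes it usable with gradient-based training.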

Keywords: Explainable artificial intelligence, Continuous logic, Nilpotent logic, Neural network, Adversarial problems

Article history: Received 9 February 2020, Revised 20 April 2020, Accepted 23 April 2020, Available online 28 April 2020, Version of Record 30 April 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.105972