Impact of techniques to reduce error in high error rule-based expert system gradient descent networks

Author: Jeremy Straub

Abstract

Machine learning systems offer the key capability of learning about their operating environment from the data they are supplied. They can learn via supervised and unsupervised training, from system results during operations, or both. However, while machine learning systems can identify solutions to problems and questions, in many cases they cannot explain how they arrived at them. Moreover, they cannot guarantee that they have not relied upon confounding variables and other non-causal relationships. In some circumstances, learned behaviors may violate legal or ethical principles, such as rules regarding non-discrimination. In these and other cases, learned associations that hold in many, but not all, cases may result in critical system failures when the system processes exceptions to the learned behaviors. A machine learning system that applies gradient descent to expert system networks has been proposed as a solution to this problem. The expert system foundation means that the system can only learn across valid pathways, while the machine learning capabilities facilitate optimization via training and operational learning. While the initial results of this approach are promising, cases were noted in which networks were optimized into high-error states and in which continued optimization further increased the error level. This paper proposes and evaluates multiple techniques to handle these high-error networks and improve system performance in these cases.
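The abstract's central idea, gradient descent applied to an expert system network so that learning can only adjust weights along pre-defined valid pathways, can be illustrated with a minimal sketch. The Python example below is not the paper's implementation: the connection mask, the squared-error objective, the clipping of weights to [0, 1], and all names are illustrative assumptions.

```python
# Minimal illustrative sketch (assumptions, not the paper's implementation):
# gradient descent over a fixed expert-system rule network. Facts feed rules
# only along pre-defined valid pathways (a binary mask); training adjusts the
# weights on those pathways and never creates new connections.
import numpy as np

rng = np.random.default_rng(0)

n_facts, n_rules = 4, 2
# Valid pathways defined by the (hypothetical) expert system: rule r may only
# use fact f where valid[r, f] == 1. Learning cannot add a connection.
valid = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1]], dtype=float)
weights = rng.uniform(0.1, 0.9, size=(n_rules, n_facts)) * valid

facts = np.array([0.8, 0.2, 0.5, 0.9])   # input fact confidences
target = np.array([0.6, 0.7])            # desired rule outputs (training signal)
lr = 0.1

for step in range(200):
    outputs = (weights * valid) @ facts           # rule outputs along valid paths only
    error = outputs - target
    grad = np.outer(error, facts) * valid         # gradient masked to valid pathways
    weights -= lr * grad                          # gradient descent update
    weights = np.clip(weights, 0.0, 1.0) * valid  # keep weights bounded and masked

print("final rule outputs:", (weights * valid) @ facts)
print("squared error:", float(np.sum(((weights * valid) @ facts - target) ** 2)))
```

The mask is what distinguishes this setup from an ordinary neural network: optimization can only re-weight relationships the expert system already deems valid, which is the property the abstract credits with preventing learning over non-causal or impermissible associations.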

Keywords: Expert system, Error reduction, Machine learning, Gradient descent, Training, Backpropagation


Paper DOI: https://doi.org/10.1007/s10844-021-00672-7