A holistic approach to interpretability in financial lending: Models, visualizations, and summary-explanations

Authors:

Highlights:

• Introduction of a DSS for financial lending that combines several AI approaches into a holistic framework.

• A novel globally interpretable two-layer additive risk model, which lends itself naturally to sparsity, decomposability, visualization, case-based reasoning, feature importance, and monotonicity constraints.

• An interactive visualization tool for the model and its local explanations.

• An application of our approach to finance, indicating that black-box models may not be necessary in the case of credit-risk assessment.

Abstract

Lending decisions are usually made with proprietary models that provide minimally acceptable explanations to users. In a future world without such secrecy, what decision support tools would one want to use for justified lending decisions? This question is timely, since the economy has dramatically shifted due to a pandemic, and a massive number of new loans will be necessary in the short term. We propose a framework for such decisions, including a globally interpretable machine learning model, an interactive visualization of it, and several types of summaries and explanations for any given decision. The machine learning model is a two-layer additive risk model, which resembles a two-layer neural network, but is decomposable into subscales. In this model, each node in the first (hidden) layer represents a meaningful subscale model, and all of the nonlinearities are transparent. Our online visualization tool allows exploration of this model, showing precisely how it came to its conclusion. We provide three types of explanations that are simpler than, but consistent with, the global model: case-based reasoning explanations that use neighboring past cases, a set of features that were the most important for the model's prediction, and summary-explanations that provide a customized sparse explanation for any particular lending decision made by the model. Our framework earned the FICO recognition award for the Explainable Machine Learning Challenge, which was the first public challenge in the domain of explainable machine learning.
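To make the architecture described in the abstract concrete, the following is a minimal sketch of a two-layer additive risk model of this kind; the notation, the grouping of features into subscales, and the choice of logistic link are illustrative assumptions on our part rather than the paper's exact formulation:

$$
r_k(\mathbf{x}) \;=\; \sigma\!\Big(\textstyle\sum_{j \in S_k} w_{k,j}\, h_{k,j}(x_j)\Big),
\qquad
\hat{p}(\mathbf{x}) \;=\; \sigma\!\Big(\beta_0 + \textstyle\sum_{k=1}^{K} \beta_k\, r_k(\mathbf{x})\Big),
$$

where $S_k$ is the set of features assigned to the $k$-th subscale, $h_{k,j}$ is a simple transparent transformation of feature $x_j$ (for example, a bin or threshold indicator), $r_k(\mathbf{x})$ is the $k$-th subscale risk score, $\sigma$ is the logistic function, and $\hat{p}(\mathbf{x})$ is the predicted probability of default. Under this reading, each hidden node is itself a small additive model, which is what makes the overall model decomposable into subscales, and sparsity or monotonicity constraints can be imposed directly on the weights $w_{k,j}$ and $\beta_k$.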

Keywords: Interpretable machine learning, Globally consistent explanations, Additive models, Lending, Finance

Article history: Received 9 January 2021, Revised 26 May 2021, Accepted 7 July 2021, Available online 15 July 2021, Version of Record 21 November 2021.

Article link: https://doi.org/10.1016/j.dss.2021.113647