Incorporating FAT and privacy aware AI modeling approaches into business decision making frameworks

Authors:

Highlights:

• A formal approach to AI systems incorporating Fairness, Accountability and Transparency.

• An explainable AI model for data-driven decisions and policy.

• Tested on affinity prediction; the general framework applies to a variety of scenarios.

Abstract

We present a formal approach to building and evaluating AI systems that incorporate the principles of Fairness, Accountability and Transparency (FAT), which are extremely important in the many domains where AI models are used, yet remain scantly applied in business settings. We develop and instantiate a FAT-based framework with a privacy-constrained dataset and build a model to demonstrate the balance among these three dimensions. These principles are gaining prominence with heightened awareness of privacy and fairness in business and society. Our results indicate that the FAT principles can co-exist in a well-designed system. Our contribution lies in presenting and evaluating a functional, FAT-based machine learning model in an affinity prediction scenario. Contrary to common belief, we show that explainable AI/ML systems need not have a major negative impact on predictive performance. Our approach is applicable in a variety of fields such as insurance, health diagnostics, government funds allocation and other business settings. Our work also has broad policy implications, making AI and AI-based decisions more ethical, less controversial, and hence more trustworthy. It contributes to emerging AI policy perspectives worldwide.
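The abstract's central claim is that an explainable, fairness-aware model can match a black-box model's predictive performance. The sketch below is not the authors' code or dataset; it is a minimal, hypothetical illustration of that kind of comparison, assuming a synthetic stand-in for the affinity prediction task, scikit-learn models, ROC AUC as the performance metric, and a demographic-parity gap as a simple fairness measure.

```python
# Hypothetical sketch (not the authors' method): compare an interpretable model
# with a black-box model on a synthetic "affinity" task and report a simple
# demographic-parity gap alongside predictive performance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a privacy-constrained affinity dataset.
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=0)
group = rng.integers(0, 2, size=len(y))  # hypothetical protected attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

def demographic_parity_gap(y_pred, g):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

models = {
    "interpretable (logistic regression)": LogisticRegression(max_iter=1000),
    "black box (gradient boosting)": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    print(f"{name}: AUC={roc_auc_score(y_te, proba):.3f}, "
          f"DP gap={demographic_parity_gap(pred, g_te):.3f}")
```

In practice, a FAT-oriented evaluation of this kind would also report accountability and transparency artifacts (e.g., model documentation and feature-level explanations); the metrics and models above are illustrative choices, not those used in the paper.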

Keywords: Explainable AI, Fairness, Transparency, Accountability, Responsible computing

Article history: Received 20 June 2021, Revised 21 December 2021, Accepted 21 December 2021, Available online 28 December 2021, Version of Record 21 February 2022.

DOI: https://doi.org/10.1016/j.dss.2021.113715