Improving fairness of artificial intelligence algorithms in Privileged-Group Selection Bias data settings

Authors:

Highlights:

• We study fairness of AI algorithms in Privileged Group Selection Bias data settings.

• AI-based hiring is a typical domain that often exhibits such selection bias.

• We demonstrate that such selection bias can indeed lead to high algorithmic bias (a toy illustration follows this list).

• We propose three in-process and pre-process fairness mechanisms.

• Our methods improve fairness considerably with a minimal compromise in accuracy.
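As a rough illustration of the setting described in the highlights (not the paper's data, methods, or experiments), the sketch below simulates a hiring history in which equally skilled applicants from the privileged group are selected more often; a classifier trained on those biased labels then reproduces the gap. The synthetic features, the selection "bonus", and the reported fairness metrics (statistical parity difference, disparate impact ratio) are all illustrative assumptions.

```python
# Minimal synthetic illustration of privileged-group selection bias (not the paper's experiment):
# training labels come from a biased historical hiring process, and a classifier
# trained on them reproduces the disparity on new applicants.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Sensitive attribute: 1 = privileged group, 0 = unprivileged group.
group = rng.integers(0, 2, size=n)
# A single "qualification" feature, identically distributed in both groups.
skill = rng.normal(size=n)

# Historical selection (training labels): same skill threshold for everyone,
# but privileged applicants receive an additive bonus -> selection bias.
bonus = 1.0
hired = (skill + bonus * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

clf = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Score a fresh applicant pool with the same (unbiased) skill distribution.
group_new = rng.integers(0, 2, size=n)
skill_new = rng.normal(size=n)
pred = clf.predict(np.column_stack([skill_new, group_new]))

rate_priv = pred[group_new == 1].mean()
rate_unpriv = pred[group_new == 0].mean()
print(f"selection rate (privileged):   {rate_priv:.3f}")
print(f"selection rate (unprivileged): {rate_unpriv:.3f}")
print(f"statistical parity difference: {rate_priv - rate_unpriv:.3f}")
print(f"disparate impact ratio:        {rate_unpriv / rate_priv:.3f}")
```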

Keywords: Algorithmic bias, Algorithmic fairness, Fairness-aware machine learning, Semi-supervised learning, Selection bias

Article history: Received 30 April 2020, Revised 12 April 2021, Accepted 23 July 2021, Available online 31 July 2021, Version of Record 4 August 2021.

Article link: https://doi.org/10.1016/j.eswa.2021.115667