Korean Journal of Psychology: General

Fairness Criteria and Mitigation of AI Bias

Korean Journal of Psychology: General, (P)1229-067X; (E)2734-1127
2021, v.40 no.4, pp.459-485
https://doi.org/10.22257/kjp.2021.12.40.4.459

Abstract

AI bias is not only an issue of humanistic and social impact and governance, but also of system robustness. As computers become artificial neural network-based autonomous intelligent systems, algorithmic bias can be introduced throughout the system construction process. The objective of this paper is to examine the forms of bias that arise at each stage of an artificial intelligence pipeline, the fairness criteria used to judge bias, and methods for mitigating it. Different types of fairness are difficult to satisfy simultaneously, and different combinations of criteria and factors are required depending on the field and context in which AI is applied. Methods that mitigate bias only in the training data, the classifier, or the predictions do not completely block bias on their own, and a balance between bias mitigation and accuracy must be sought. Even when bias is identified through unrestricted access to an algorithm in an AI audit, it is difficult to determine whether the algorithm itself is biased. Bias mitigation technology is moving beyond simply removing bias toward jointly reducing bias, securing system robustness, and adjusting among the various types of fairness. In conclusion, these characteristics imply that policies and education for recognizing AI bias and seeking solutions should be explored in terms of recognizing and coordinating bias based on an understanding of the system, beyond recognizing issues at the conceptual level.
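The claim that different fairness criteria are difficult to satisfy simultaneously can be made concrete with a minimal sketch. The code below uses small hypothetical data (labels `y`, predictions `y_hat`, and a binary group attribute `g` are invented for illustration) and computes two common criteria from the fairness literature: the demographic parity gap (difference in positive prediction rates between groups) and the equal opportunity gap (difference in true positive rates between groups). The same classifier yields different gaps under the two criteria, so equalizing one does not equalize the other.

```python
# Hypothetical toy data: true labels, predictions, and group membership (0/1).
y     = [1, 0, 1, 0, 1, 0, 1, 1]
y_hat = [1, 0, 1, 1, 0, 0, 1, 1]
g     = [0, 0, 0, 0, 1, 1, 1, 1]

def rate(pred, cond):
    """Mean of pred over the positions where cond is True."""
    sel = [p for p, c in zip(pred, cond) if c]
    return sum(sel) / len(sel)

# Demographic parity gap: |P(y_hat=1 | g=0) - P(y_hat=1 | g=1)|
dp_gap = abs(rate(y_hat, [x == 0 for x in g]) -
             rate(y_hat, [x == 1 for x in g]))

# Equal opportunity gap: difference in true positive rates,
# |P(y_hat=1 | y=1, g=0) - P(y_hat=1 | y=1, g=1)|
tpr0 = rate(y_hat, [gi == 0 and yi == 1 for gi, yi in zip(g, y)])
tpr1 = rate(y_hat, [gi == 1 and yi == 1 for gi, yi in zip(g, y)])
eo_gap = abs(tpr0 - tpr1)

print(f"demographic parity gap: {dp_gap:.3f}")  # 0.250 on this data
print(f"equal opportunity gap:  {eo_gap:.3f}")  # 0.333 on this data
```

On this toy data the two gaps differ, which is the practical face of the incompatibility the abstract describes: adjusting the classifier to close one gap generally shifts the other, so the appropriate criterion must be chosen per application context.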

Keywords
machine learning, bias, fairness, data, algorithm
