ISSN : 1229-067X
AI bias is not only a matter of humanities, social impact, and governance, but also of system robustness. Algorithmic bias has the characteristic of entering during system construction as computers become autonomous intelligence systems based on artificial neural networks. The objective of this paper is to examine the forms of bias introduced at each stage of an AI system, the fairness criteria used to judge bias, and methods for mitigating it. Different types of fairness are difficult to satisfy simultaneously, and different combinations of criteria and factors are required depending on the field and context in which AI is applied. No single mitigation method applied to the training data, the classifier, or the predictions completely blocks bias, so a balance between bias mitigation and accuracy must be sought. Even when an AI audit grants unrestricted access to an algorithm, it remains difficult to determine whether the algorithm is biased. Bias mitigation technology is moving beyond simply removing bias toward jointly reducing bias, securing system robustness, and reconciling the various types of fairness. In conclusion, these characteristics imply that policies and education that recognize AI bias and seek solutions should go beyond recognizing issues at the conceptual level and pursue bias recognition and adjustment grounded in an understanding of the system.
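The tension between fairness criteria noted above can be made concrete with a minimal sketch (not from the paper; the toy data and metric definitions are illustrative assumptions): when two groups have different base rates of the positive outcome, even a perfectly accurate classifier satisfies equal opportunity while violating demographic parity.

```python
# Illustrative sketch: two common group-fairness metrics that cannot,
# in general, both be zero when base rates differ across groups.

def demographic_parity_gap(y_pred, group):
    """|P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1)|: gap in positive-prediction rates."""
    rate = lambda g: sum(p for p, a in zip(y_pred, group) if a == g) / group.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """|TPR(A=0) - TPR(A=1)|: gap in true-positive rates among y_true == 1."""
    def tpr(g):
        preds = [p for t, p, a in zip(y_true, y_pred, group) if a == g and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr(0) - tpr(1))

# Hypothetical toy data: group 0 has base rate 0.75, group 1 has 0.25.
y_true = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = list(y_true)  # a perfectly accurate classifier

print(equal_opportunity_gap(y_true, y_pred, group))  # 0.0: equal opportunity holds
print(demographic_parity_gap(y_pred, group))         # 0.5: demographic parity fails
```

Equalizing the positive-prediction rates here would force errors on one group, which is the bias-versus-accuracy trade-off the abstract refers to.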