
Korean Journal of Psychology: General

Principles and methods for model assessment in psychological research in the era of big-data and machine learning

Korean Journal of Psychology: General, (P)1229-067X; (E)2734-1127
2021, v.40 no.4, pp.389-413
https://doi.org/10.22257/kjp.2021.12.40.4.389


Abstract

The objective of the present article is to explain principles of estimation and assessment for statistical models in psychological research. These principles have been actively discussed over the past few decades in the field of mathematical and quantitative psychology. The essence of the discussion is as follows: 1) candidate models should be regarded not as the true model but as approximating models, 2) the discrepancy between a candidate model and the true model will not disappear even in the population, and therefore 3) it is best to select the approximating model exhibiting the smallest discrepancy from the true model. The discrepancy between the true model and a candidate model estimated in the sample has been referred to as overall discrepancy in quantitative psychology. In the field of machine learning, models are assessed by the extent to which a model's performance generalizes to new, unseen samples, rather than being limited to the training samples; the error a model incurs on such samples is referred to as the generalization error or prediction error. The present article elucidates the point that the principle of model assessment based on overall discrepancy advocated in quantitative psychology is identical to the principle of model assessment based on generalization/prediction error firmly adopted in machine learning. Another objective of the present article is to help readers appreciate that questionable data-analytic practices widely tolerated in psychology, such as HARKing (Kerr, 1998) and QRPs (Simmons et al., 2011), have been likely causes of the problem known as overfitting in individual studies, which, in turn, has collectively resulted in the recent debates over the replication crisis in psychology.
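The gap between performance on the training sample and on new, unseen samples can be made concrete with a small simulation (an illustration, not from the article; the data-generating model, sample sizes, and polynomial degrees below are arbitrary choices). A flexible candidate model drives the training error down while its error on fresh samples, the generalization/prediction error, grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True model": y = 2x + noise. Candidate models are polynomial approximations.
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = 2 * x + rng.normal(0, 0.5, n)
    return x, y

x_train, y_train = make_data(20)      # small training sample
x_test, y_test = make_data(1000)      # stand-in for new, unseen samples

def mse(coefs, x, y):
    """Mean squared error of a fitted polynomial on a data set."""
    return float(np.mean((np.polyval(coefs, x) - y) ** 2))

results = {}
for degree in (1, 9):
    coefs = np.polyfit(x_train, y_train, degree)          # least-squares fit
    results[degree] = (mse(coefs, x_train, y_train),       # training error
                       mse(coefs, x_test, y_test))         # generalization error
    print(f"degree {degree}: train MSE {results[degree][0]:.3f}, "
          f"test MSE {results[degree][1]:.3f}")
```

The degree-9 model necessarily attains a lower training error than the degree-1 model, yet its gap between test and training error is larger: it has overfit the training sample rather than approximated the true model more closely.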
As a remedy against these questionable practices, this article reintroduces cross-validation methods, whose discussion in psychology dates back at least to the 1950s (Mosier, 1951), by couching them as estimators of the generalization/prediction error, in the hope of reducing overfitting in psychological research.
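The cross-validation idea can be sketched in a few lines (again an illustration under assumed settings, not the article's procedure; the linear data-generating model, five folds, and candidate degrees are hypothetical choices). Each fold is held out in turn, the model is fit on the remaining folds, and the average held-out error serves as the estimate of prediction error used to compare candidate models:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)
y = 2 * x + rng.normal(0, 0.5, 60)   # true model is linear

def kfold_cv_mse(x, y, degree, k=5):
    """k-fold cross-validation estimate of the prediction error (MSE)."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        held_out = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coefs = np.polyfit(x[train], y[train], degree)
        errors.append(np.mean((np.polyval(coefs, x[held_out]) - y[held_out]) ** 2))
    return float(np.mean(errors))

# Compare candidate approximating models by estimated prediction error.
scores = {d: kfold_cv_mse(x, y, d) for d in (1, 3, 9)}
best = min(scores, key=scores.get)
print(scores, "-> select degree", best)
```

Because the held-out folds play the role of new samples, the selection favors the candidate with the smallest estimated discrepancy from the true model rather than the one that fits the training data most tightly.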

keywords
overfitting, generalization error, training error, cross-validation, bias-variance tradeoff
