ISSN : 1229-067X
The objective of the present article is to explain principles of estimation and assessment for statistical models in psychological research. These principles have been actively discussed over the past few decades in mathematical and quantitative psychology. The essence of the discussion is as follows: 1) candidate models are to be considered not the true model but approximating models, 2) the discrepancy between a candidate model and the true model will not disappear even in the population, and therefore 3) it is best to select the approximating model exhibiting the smallest discrepancy from the true model. The discrepancy between the true model and a candidate model estimated in the sample has been referred to as overall discrepancy in quantitative psychology. In the field of machine learning, models are assessed in light of the extent to which a model's performance generalizes to new, unseen samples rather than being limited to the training samples; the quantity capturing this ability to generalize is referred to as the generalization error or prediction error. The present article elucidates the point that the principle of model assessment based on overall discrepancy advocated in quantitative psychology is identical to the principle of model assessment based on generalization/prediction error firmly adopted in machine learning. Another objective of the present article is to help readers appreciate that questionable data-analytic practices widely tolerated in psychology, such as HARKing (Kerr, 1998) and QRPs (Simmons et al., 2011), have been likely causes of overfitting in individual studies, which in turn has collectively resulted in the recent debates over the replication crisis in psychology.
As a remedy against the questionable practices, this article reintroduces cross-validation methods, whose initial discussion dates back at least to the 1950s in psychology (Mosier, 1951), by couching them in terms of estimators of the generalization/prediction error in the hope of reducing the overfitting problems in psychological research.
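To make the connection between cross-validation and prediction-error estimation concrete, here is a minimal Python sketch with hypothetical toy data (not from the article). A 1-nearest-neighbour predictor memorizes the training sample perfectly, so its training (resubstitution) error is zero, yet cross-validation reveals the nonzero error to be expected on new, unseen samples:

```python
import random

random.seed(0)

# Toy data: y = x + noise (hypothetical example, not from the article).
xs = [random.uniform(0, 10) for _ in range(60)]
ys = [x + random.gauss(0, 1) for x in xs]

def knn_predict(train_x, train_y, x):
    """Predict with the single nearest training point (1-NN)."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

# Training error: each point is its own nearest neighbour, so the model
# "memorizes" the sample and the error is exactly zero.
train_err = mse([knn_predict(xs, ys, x) for x in xs], ys)

# 5-fold cross-validation: hold out each fold, predict it from the rest;
# the averaged holdout error estimates the generalization/prediction error.
k = 5
folds = [list(range(i, len(xs), k)) for i in range(k)]
cv_errs = []
for fold in folds:
    tr = [j for j in range(len(xs)) if j not in fold]
    tx, ty = [xs[j] for j in tr], [ys[j] for j in tr]
    preds = [knn_predict(tx, ty, xs[j]) for j in fold]
    cv_errs.append(mse(preds, [ys[j] for j in fold]))
cv_err = sum(cv_errs) / k

print(f"training error: {train_err:.3f}")  # 0.000 -- pure memorization
print(f"5-fold CV error: {cv_err:.3f}")    # > 0 -- honest error estimate
```

The gap between the two numbers is exactly the overfitting that cross-validation is designed to expose.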
Human history has been a process of expanding human control over the environment. The critical factor, from a psychological viewpoint, is the emergence of selfhood owing to the capacity for self-consciousness and language. The Western self, armed with the political philosophies of individualism and liberalism, redefined the nature of humanity and provided an incessant source of power to energize its quest to expand control over the environment in an undeterred manner. The development of technology has removed most of the restraints that defined humanity and posed a pivotal moment for proceeding beyond humanity. Facing the fourth industrial revolution, humanity is experiencing an age of overly inflated selfhood. Two symptoms expose this inflation: the obsession with happiness and the increase of narcissism across nations. To understand this inflated self and the proposals to deflate it, we briefly review two approaches, self-actualization and self-transcendence, and relate them to mature personality. We review some alternative conceptions arising to quiet the self, such as self-expansion, selflessness, and the interactive self. We propose that a new perspective of interactive selfhood (Chok self), couched in the Korean worldview, is a promising alternative to cure the overly inflated self.
AI bias is not only an issue of humanistic and social impact and governance, but also of systemic robustness. Algorithmic bias can intervene throughout the system construction process as computers become artificial neural network-based autonomous intelligence systems. The objective of this paper is to examine the aspects of bias involved at each stage of artificial intelligence, the fairness criteria for judging bias, and methods for bias mitigation. Different types of fairness are difficult to satisfy simultaneously and require different combinations of criteria and factors depending on the field and context of AI application. No single method for mitigating the bias of training data, classifiers, or predictions completely blocks bias, and a balance between bias mitigation and accuracy should be sought. Even if bias is investigated through unlimited access to the algorithm via AI auditing, it is difficult to determine whether the algorithm is biased. Bias mitigation technology goes beyond simply removing bias and is moving toward jointly reducing bias, securing the robustness of the system, and adjusting among the various types of fairness. In conclusion, these characteristics imply that policies and education that recognize AI biases and seek solutions should be explored in terms of bias recognition and coordination grounded in system understanding, beyond recognizing issues at the conceptual level.
This study examined the effect of user characteristics on the acceptability of artificial intelligence technology. More specifically, the effects of user perceptions, psychological characteristics, and demographic characteristics on the acceptability of artificial intelligence technology were examined. According to the results, user perceptions had a more important effect on artificial intelligence acceptability than the other characteristics. In particular, the performance expectancy of artificial intelligence devices or services was closely related to acceptance of artificial intelligence. For anxiety about artificial intelligence, openness and anthropomorphism among user perceptions were found to be more important than the demographic characteristics of users. For product use intention, as with acceptability, user perceptions were found to have the greatest influence, with hedonic motivation and social influence being the most important among them. Finally, the implications of our findings and suggestions for future research are discussed.
The purpose of this study was to discuss the roles and tasks of career counseling and vocational psychology in the fourth industrial revolution era. We first addressed how the world of work has changed in this era and how these changes have impacted individuals and society. Second, we explored issues to consider in order to improve individuals' lives and society using contemporary concepts and theories of career counseling and vocational psychology. Specifically, we reviewed boundaryless and protean career attitudes, constructivist career theories and meaningful work perspectives, and the psychology of working framework. Lastly, we reviewed under-addressed aspects of the existing discourse and proposed themes and tasks to which career counseling and vocational psychology need to further attend. Specifically, we discussed that the disciplines need to attend to the influences of organizational and social contexts on individuals' career development, examine multifaceted aspects of self-development including its potential risks, and find strategies to help individuals experience a higher sense of meaning and purpose in work in the changing world of work. We also discussed the importance of contributing to the reduction of structural social inequality by taking a more diverse and inclusive view of work and promoting a more integrative perspective on life and work by expanding the definition of work.
Computerized adaptive testing (CAT) is a computer-administered test in which the next question for estimating the examinee's trait level is selected depending on his or her responses to the previous items, resulting in testing tailored to each individual examinee. A defining feature of CAT stems from its item selection algorithms, among which both research interest in and practical applications of decision-tree based CAT (DT-based CAT) have been rising recently. In the field of machine learning, however, it is well known that decision trees, as predictive models with simple and interpretable tree structures, are vulnerable to overfitting, that is, to creating overly complex trees that do not generalize to newly observed data. Among the various ensemble techniques developed to address this problem, we paid attention to the Alternating Model Tree (AMT) due to its interpretable tree-like structure. The purpose of this article is to investigate the viability of the AMT as an item selection algorithm for constructing CAT. To this end, we first presented a detailed exposition of how AMT-based CAT can be constructed and then compared its performance with DT-based CAT using two sets of publicly available psychological test scores. The results provided supportive evidence that AMT-based CAT is viable and can predict test scores at least as accurately as DT-based CAT does. Based on our findings, we discuss implications, limitations, and directions for future studies.
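To illustrate the basic mechanics of tree-based item selection, the following sketch uses a small hypothetical tree (not the AMT or any tree from the study): each internal node names the item to administer next, the examinee's scored response (0/1) selects the branch, and each leaf carries a trait-score estimate:

```python
# Hypothetical DT-based CAT tree. Internal nodes hold an "item" key plus
# one child per possible response; leaves hold a trait-score estimate.
# In practice the tree would be grown from calibration data.
tree = {
    "item": "Q1",
    0: {"item": "Q2", 0: {"score": -1.5}, 1: {"score": -0.5}},
    1: {"item": "Q3", 0: {"score": 0.5}, 1: {"score": 1.5}},
}

def administer(node, respond):
    """Walk the tree: ask the item at each node, branch on the response."""
    while "score" not in node:
        answer = respond(node["item"])  # examinee's scored response: 0 or 1
        node = node[answer]
    return node["score"]

# A hypothetical examinee who answers Q1 correctly and Q3 incorrectly:
# only two of the three items are ever administered.
responses = {"Q1": 1, "Q3": 0}
print(administer(tree, responses.__getitem__))  # -> 0.5
```

This adaptivity (each examinee sees only the items on one root-to-leaf path) is what makes the tree's structure, and hence its tendency to overfit, central to CAT quality.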
The Bayesian estimation method has recently received a lot of attention in the social sciences. The Bayesian method has a distinctive component, the prior distribution, through which researchers' background knowledge can be reflected in the estimation process. The specification of the prior distribution affects the overall estimation. Although the prior distribution is the most important factor in Bayesian analysis, there is a lack of methodological research on understanding and appropriately specifying it. Therefore, the present study aims to help researchers apply prior distributions in their estimation by addressing the importance of the prior distribution and the overall content of prior specification. First, we explore the approach in which researchers do not directly specify the prior distribution. This means selecting the default prior distribution automatically provided by the software; researchers who use this option must know exactly which default prior distribution is actually provided. To this end, we discuss the default priors of frequently used programs, as well as known problems with default priors. Second, we address the approach in which researchers specify the prior distribution themselves. The prior distributions that can be directly specified include noninformative and informative prior distributions; which to use is determined by the presence of prior information on the parameters. This study deals with the necessity of noninformative prior distributions and proposed methods for specifying them, provides studies that can be referenced when specifying informative prior distributions, and explores criteria that can be referenced for selecting the degree of informativeness by synthesizing the criteria across many studies. We provide practical help through data examples applying the methods discussed in the text, and finally discuss the significance and limitations of the present study.
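As a concrete illustration of how the choice of prior affects estimation, the following sketch uses the standard conjugate normal-normal model for a mean with known variance, with hypothetical numbers: a diffuse (near-noninformative) prior leaves the estimate essentially at the sample mean, while an informative prior pulls it toward the prior mean:

```python
def posterior_mean(mu0, tau0_sq, ybar, sigma_sq, n):
    """Conjugate normal-normal update for a mean with known variance:
    the posterior mean is a precision-weighted average of the prior
    mean mu0 and the sample mean ybar."""
    precision0 = 1.0 / tau0_sq      # prior precision
    precision_lik = n / sigma_sq    # data (likelihood) precision
    return (precision0 * mu0 + precision_lik * ybar) / (precision0 + precision_lik)

# Hypothetical data: n = 10 observations, sample mean 5.0, sigma^2 = 4.
# A diffuse prior (huge variance) barely moves the data estimate...
diffuse = posterior_mean(mu0=0.0, tau0_sq=1e6, ybar=5.0, sigma_sq=4.0, n=10)
# ...while an informative prior centred at 0 shrinks the estimate toward 0.
informative = posterior_mean(mu0=0.0, tau0_sq=0.5, ybar=5.0, sigma_sq=4.0, n=10)

print(f"diffuse prior:     {diffuse:.3f}")      # ~5.0 (sample mean)
print(f"informative prior: {informative:.3f}")  # shrunk toward prior mean 0
```

With small n, the informative prior dominates the estimate, which is precisely why understanding what the software's default prior is (and how informative it is) matters.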
The purpose of this study was to develop a Motivation Balance Scale and Balance Index that can measure the constructs of motivational balancing theory (Shin, 2017). To this end, previous studies related to motivation were reviewed and an open-ended questionnaire was analyzed to derive the components of motivation and develop preliminary items. The components of motivation were empirically confirmed in a preliminary survey (n=353) of university students nationwide, and a validation analysis was performed in the main survey (n=464). As a result, the Motivation Balance Scale consists of 4 factors (autonomy, competence, belongingness, and a sense of goals), with each sub-factor consisting of 4 items, for a total of 16 items. In addition, based on previous studies, an index was developed to gauge the degree of balance among the sub-factors of the Motivation Balance Scale. Finally, the implications of the Motivation Balance Scale and Balance Index developed in this study were discussed.
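The abstract does not reproduce the index formula, but one common way such a balance index can be constructed is sketched below: the spread of the four subscale means is rescaled so that 1 indicates perfect balance and 0 maximal imbalance. This definition is purely illustrative and may differ from the index actually developed in the study:

```python
from statistics import pstdev

def balance_index(subscale_means, scale_min=1, scale_max=5):
    """Hypothetical balance index over the four subscale means
    (autonomy, competence, belongingness, sense of goals):
    1 - (spread of the means / largest possible spread), so that
    1 = perfectly balanced, 0 = maximally imbalanced.
    Illustrative only; the study's own index may be defined differently."""
    max_spread = (scale_max - scale_min) / 2  # largest possible SD on the scale
    return 1 - pstdev(subscale_means) / max_spread

print(balance_index([4.0, 4.0, 4.0, 4.0]))  # 1.0 -- perfectly balanced
print(balance_index([5.0, 1.0, 5.0, 1.0]))  # 0.0 -- maximally imbalanced
```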
The current study was conducted to i) introduce the method of successive intervals, a stimulus-centered scaling method that has not received much attention in psychology, ii) show that it can be applied to measure the blameworthiness of behavior, and iii) demonstrate that the scaled behaviors can be applied to measure an individual's psychological properties. The authors classified psychometric methods, introduced several stimulus-centered scaling methods, and discussed the usefulness of the method of successive intervals by comparing it with those methods. To find the scale value of each stimulus and the boundaries of the response categories, the method of successive intervals determines the relative positions of all stimuli and response categories based on the proportion of raters who assigned each stimulus to each response category. In Study 1, a list of 33 morally unjustifiable behaviors was constructed from existing studies, and scale values of the blameworthiness of the behaviors were calculated (N=500). The scale values of behaviors prohibited by law were higher than those of others, while the scale values of behaviors that cannot be punished but may be perceived as bad were relatively low. In Study 2, the representation of innocence (the representation of 'innocent'), meaning people's psychological representation of 'not blameworthy', was measured (N=108) based on the scale values obtained in Study 1. The relationship between the representation of 'innocent' measured by the list of behaviors and related variables was in the theoretically predicted direction. These results show that stimulus scale values based on the method of successive intervals can be applied to measure an individual's psychological attributes related to the stimuli.
This study is expected to encourage researchers to consider various scaling methods by showing the applicability of the method of successive intervals, which has not been used frequently in psychology.
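A simplified computational sketch of the method of successive intervals, using hypothetical rating proportions for two behaviors (not data from the study): cumulative category proportions are converted to normal deviates, category boundaries are estimated as column means of the deviates, and each stimulus's scale value then follows as a row effect (higher = more blameworthy):

```python
from statistics import NormalDist

# Hypothetical data. Rows: stimuli (behaviors); columns: proportion of
# raters assigning the stimulus to each of 4 ordered blameworthiness
# categories (each row sums to 1).
props = {
    "jaywalking":  [0.10, 0.40, 0.35, 0.15],
    "shoplifting": [0.02, 0.08, 0.30, 0.60],
}

z = NormalDist().inv_cdf  # inverse standard-normal CDF

def cumulative_z(p):
    """Cumulative proportions below each of the 3 category boundaries,
    transformed to standard normal deviates."""
    cum, out = 0.0, []
    for q in p[:-1]:
        cum += q
        out.append(z(cum))
    return out

# Simplified least-squares estimates (z_ig ~ boundary_g - scale_i):
# boundary locations = column means of the deviates; each stimulus's
# scale value = mean boundary minus the stimulus's mean deviate.
zmat = {s: cumulative_z(p) for s, p in props.items()}
boundaries = [sum(zmat[s][g] for s in props) / len(props) for g in range(3)]
scale = {s: sum(boundaries) / 3 - sum(zs) / 3 for s, zs in zmat.items()}

for s, v in scale.items():
    print(f"{s}: {v:+.2f}")  # higher scale value = more blameworthy
```

Because shoplifting piles its ratings into the highest category, it receives the higher scale value, mirroring the study's finding that legally prohibited behaviors scale above merely disapproved ones.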