Facial expressions are a form of visual information that reflects our inner states and plays an important role in how people communicate with others. Recently, facial expressions have also been conveyed through emoticons in online communication, such as chatting on social media. The current study investigated the mechanism underlying the perception of facial expressions in emoticons. There are two hypotheses regarding the processing of facial expressions. First, the feature-based processing hypothesis suggests that facial expressions are perceived from information gathered from individual facial features, such as the eyes, nose, and mouth, each processed independently. Second, according to the holistic processing hypothesis, facial expressions are perceived through configural information, such as the distance between the eyes. For facial expressions in real human faces, most studies have supported the holistic processing hypothesis; however, findings for facial expressions in emoticons are inconsistent. The purpose of the current study was to explore whether the mechanism underlying facial expression perception in emoticons is influenced by the elaboration level, that is, how much detail is used to depict the facial features. In the experiment, we employed two types of emoticons to manipulate the elaboration level: simple and elaborated. In a simple emoticon, all facial features are represented with a few line segments, whereas in an elaborated emoticon, the features are depicted in detail, at the level of a drawing. Participants performed an emotion recognition task in which a human face or an emoticon displaying one of five basic facial expressions (anger, fear, happiness, sadness, and surprise) was presented in an aligned or misaligned state. For the elaborated emoticons, recognition accuracy was higher in the aligned condition than in the misaligned condition, whereas for the simple emoticons there was no significant difference between the two conditions. These results suggest that the more elaborately the features of an emoticon are depicted, the stronger the effect of holistic processing.
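To make the design concrete, the sketch below (Python, with hypothetical variable names and simulated responses; it is not the authors' materials or analysis code) crosses the stimulus-type and alignment factors described above and summarizes recognition accuracy per cell. Under this reading of the design, a reliable aligned-over-misaligned advantage in accuracy is the signature of holistic processing.

```python
# Hypothetical sketch of the factorial design described in the abstract:
# stimulus type (human face, simple emoticon, elaborated emoticon)
# x alignment (aligned, misaligned) x expression (five basic emotions).
from itertools import product
from collections import defaultdict
import random

STIMULUS_TYPES = ["human_face", "simple_emoticon", "elaborated_emoticon"]
ALIGNMENTS = ["aligned", "misaligned"]
EXPRESSIONS = ["anger", "fear", "happiness", "sadness", "surprise"]

# Build the full factorial trial list (one trial per combination here;
# a real experiment would repeat and randomize trials within blocks).
trials = [
    {"stimulus": s, "alignment": a, "expression": e}
    for s, a, e in product(STIMULUS_TYPES, ALIGNMENTS, EXPRESSIONS)
]
random.shuffle(trials)

# Simulated responses: each trial records whether the participant labeled
# the expression correctly (placeholder for real response data).
for t in trials:
    t["correct"] = random.random() < 0.8

# Recognition accuracy per stimulus-type x alignment cell, the comparison
# of interest: aligned > misaligned accuracy indexes holistic processing.
accuracy = defaultdict(list)
for t in trials:
    accuracy[(t["stimulus"], t["alignment"])].append(t["correct"])

for (stimulus, alignment), outcomes in sorted(accuracy.items()):
    print(f"{stimulus:20s} {alignment:10s} accuracy = {sum(outcomes) / len(outcomes):.2f}")
```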