
Vol.30 No.2

Abstract

Two experiments were directed at postural coordination in dart throwing. Darts can be thrown using only the elbow and wrist while keeping the rest of the body stationary. In order to introduce variability in the coordination pattern, distances to the target (Experiment 1), and the characteristics of support surface (Experiment 2) were varied. Dart throwing data were obtained using a wireless motion tracking system via sensors attached to the index finger, wrist, elbow, shoulder, hip, knee, and ankle of the right side (the throwing hand) with additional sensors attached to the head and the left shoulder, for a total of 9 sensors. Cross-correlations between joints (wrist-elbow, wrist-shoulder, wrist-hip, wrist-knee, wrist-ankle, elbow-shoulder, elbow-hip, elbow-knee, elbow-ankle, shoulder-hip, shoulder-knee, shoulder-ankle, hip-knee, hip-ankle, and knee-ankle) were used to construct coordination patterns. The standard deviations of the head and the right shoulder motion were used to assess body sway. In each condition of target distance (Experiment 1) and support surface (Experiment 2), participants threw darts 20 times, preceded by 20 practice throws. Different patterns of coordination arose as a function of target distance and support surface. Coupling strengths between joints were rearranged to cope with different demands imposed by different task constraints. Of particular interest was the finding that body sway was minimal in the narrow beam condition, less than in the wide plank or yoga mattress condition. Results suggest that the motor control system accomplishes a goal-directed movement by reassembling the multijoint kinematic chain dynamically under different task constraints.
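The between-joint coupling measure described above can be illustrated with a minimal sketch. The joint-motion time series below are hypothetical stand-ins for the actual sensor recordings, and since the abstract does not specify the exact cross-correlation procedure, a zero-lag normalized cross-correlation (Pearson r) is assumed here:

```python
import numpy as np

# Hypothetical joint-motion time series (e.g., wrist and elbow angular
# velocity over one throw); real data would come from the wireless
# motion-tracking sensors described in the abstract.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
wrist = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(t.size)
elbow = np.sin(2 * np.pi * 2 * t + 0.3) + 0.1 * rng.standard_normal(t.size)

def zero_lag_xcorr(x, y):
    """Normalized cross-correlation at lag 0 (Pearson r) between two series."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

r = zero_lag_xcorr(wrist, elbow)
print(f"wrist-elbow coupling strength: {r:.2f}")
```

The same function applied to each of the 15 joint pairs listed above would yield one coupling-strength matrix per condition.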

Abstract

The human visual system represents a statistical summary of complex visual input to overcome its limited capacity. One example of such summary representation is average size perception, which is known to be accurate and precise. In the current study, we developed and validated a computational model of mean size perception. In this model, we assumed that the visual system encodes individual sizes with early noise, then integrates the noisy size information from multiple inputs; finally, late noise is added to the integrated size information. The proposed model was validated with a psychophysical experiment in which the standard and test displays each contained multiple circles of different sizes, and observers were asked to report which display had the larger mean size. The psychophysical data were well accounted for by the model with late noise: the threshold for mean size discrimination decreased with set size, but the decrease decelerated at large set sizes. The proposed model allows us to understand the underlying mechanism of mean size perception, and the experimental paradigm used in the current study is expected to be a useful tool for studying ensemble perception of various visual properties.
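The set-size prediction reported above (thresholds fall with set size but level off) follows directly from how early and late noise combine: early noise on each item averages out as 1/sqrt(n), while late noise is added once and sets a floor. A minimal sketch with illustrative, non-fitted noise parameters:

```python
import math

def discrimination_threshold(n, sigma_early=1.0, sigma_late=0.3, k=1.0):
    """Predicted mean-size discrimination threshold for set size n.

    Early noise corrupts each of the n items independently, so its
    contribution to the mean shrinks as 1/sqrt(n); late noise is added
    once, after averaging, and sets a floor the threshold cannot go
    below. (Parameter values here are illustrative, not fitted.)
    """
    return k * math.sqrt(sigma_early**2 / n + sigma_late**2)

for n in (1, 2, 4, 8):
    print(n, round(discrimination_threshold(n), 3))
```

With late noise set to zero the threshold would keep falling as 1/sqrt(n); the nonzero late-noise term is what produces the decelerating curve described in the abstract.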

Abstract

Emoticons are widely used in various offline and online communications. To investigate whether the perceptual encoding of face emoticons relies on face-specific configural processing, we examined how stimulus inversion affects the amplitude and peak latency of the face-sensitive ERP component N170, which is known to be larger and delayed in response to inverted relative to upright human faces (the N170 face inversion effect; N170-FIE), as well as the ERP component P1, which is known to be sensitive to low-level visual features. ERPs were recorded to upright and inverted face emoticons, face photos, and house icons, each surrounded by an oval-shaped outline. Participants had to judge the relative height of two gaps on the outline. N170 was enhanced for face emoticons and face photos relative to house icons (the face-sensitive N170 effect) and showed no amplitude difference between face emoticons and face photos. N170 amplitude was not affected by inversion for any type of experimental stimulus. N170 was delayed for face photos relative to both face emoticons and house icons, with no latency difference between face emoticons and house icons; these latency differences among experimental stimuli were found only for inverted stimuli. For face emoticons and face photos, N170 was delayed for inverted relative to upright faces (N170-FIE), but no inversion effect on N170 peak latency was found for house icons. However, the magnitude of the inversion effect was largest for face photos and did not differ between face emoticons and house icons. The amplitude and peak latency of P1 showed neither a face-sensitive effect nor an FIE, reflecting only low-level visual differences among the experimental stimuli.
These findings show that perceptual encoding of upright face emoticons relies on face-sensitive configural processing mechanisms to a lesser degree than face photos, whereas perceptual encoding of inverted face emoticons may rely on object-sensitive perceptual mechanisms, as with house icons.
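The N170 amplitude and peak-latency measures analyzed above amount to a peak search within a post-stimulus time window. A minimal sketch, in which the waveform, sampling rate, and window are hypothetical stand-ins for the recorded ERPs:

```python
import numpy as np

# Hypothetical single-channel ERP waveform sampled at 500 Hz; the N170
# is a negative deflection peaking around 170 ms after stimulus onset.
fs = 500
t = np.arange(0, 0.4, 1 / fs)                            # 0-400 ms epoch
erp = -5.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02**2))  # microvolts

def peak_in_window(t, v, lo, hi):
    """Return (latency_s, amplitude_uV) of the most negative point in [lo, hi]."""
    mask = (t >= lo) & (t <= hi)
    i = np.argmin(v[mask])
    return float(t[mask][i]), float(v[mask][i])

lat, amp = peak_in_window(t, erp, 0.13, 0.21)  # assumed N170 search window
print(f"N170 latency: {lat * 1000:.0f} ms, amplitude: {amp:.1f} uV")
```

Comparing `lat` and `amp` across stimulus types and orientations corresponds to the amplitude and latency contrasts reported in the abstract.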

Abstract

Comprehension of physical events in terms of cause and effect is fundamental for making sense of and dealing successfully with changes in the dynamic physical world. Previous research has demonstrated that the causal structure of the world can, in some cases, be directly perceived: When two billiard balls collide, observers perceive that the action of one ball caused the other's motion, merging two motion events into a unitary percept. The current study explored whether such causal interpretations can contribute to resolving low-level ambiguities in motion perception. We used a bistable apparent motion display, a motion quartet, which can lead to the perception of either horizontal or vertical motion, and tested the effects of “context objects” which moved in such a way that motion targets appeared to collide with them in either the horizontal or vertical dimension. Our results show that contextual motion implying a Michotte-style launch can strongly bias observed motion correspondence, consistent with physical regularities of mechanical causality in a postdictive way. This suggests that the perception of causality is an earlier and more pervasive phenomenon than previously understood, and in fact can influence the perception of motion itself.

Abstract

The purpose of this study was to investigate quantitative changes in emotional intensity ratings with the addition of contextual information in children with Autism Spectrum Disorder (ASD). Participants were 20 children with ASD and 20 typically developing (TD) children matched on full-scale IQ and age. All participants were asked to rate the intensity of a single emotion (happy/angry) in images presented under two conditions (context-free and context-embedded). The results showed a significant interaction between group and condition only in the anger block. Further analysis revealed that TD children reported lower intensity for an angry face embedded among happy faces, whereas intensity ratings of children with ASD did not differ despite the added contextual cues. The results suggest that children with ASD are impaired in using contextual cues to moderate their assessment of emotional intensity. Clinical implications and limitations of the study are discussed.

Abstract

This study concerns the generalization of expected value to similar stimuli. Previous research on perception has shown that the generalization curve is wider in a loss context than in a gain context. In this study, we were interested in the effect of reward context on the transfer of expected value to similar stimuli that are easy to discriminate from the original. The experiment was divided into a learning phase and a decision phase. In the learning phase, participants learned associations between orientations and rewards; in the decision phase, they rated their willingness to bet on stimuli similar to the learned stimuli. As a result, expected value transferred to similar stimuli, and the strength of willingness to bet depended on the degree of similarity. There was no significant difference between the gain and loss contexts. However, when the condition was modified to be biased toward gain, no difference in the strength of willingness to gamble appeared. It seems that a minor loss cannot elicit loss-averse behavior in a gain context, and even when the gain is much larger than the loss, there is no significantly stronger gain-seeking tendency.

Hyeonbo Yang(Department of Psychology, Pusan National University) ; Donghoon Lee(Department of Psychology, Pusan National University) pp.203-210 https://doi.org/10.22172/cogbio.2018.30.2.007
Abstract

According to the psychological construction theory of emotion, labeling an affect promotes the construction of the conceptual representation of facial expressions (Lindquist, MacCormack, & Shablack, 2015). In the present study, we investigated the effect of emotion labels on the categorical judgment of the emotion of facial stimuli using a psychophysical method. We also compared two conditions, in which labels were read aloud or silently, to see whether the auditory feedback accompanying the utterance would increase the effect. During the experiment, one of three words ('Happiness', 'Anger', or 'Mass') was presented, and participants read the word aloud if it was underlined. Then a target face, randomly chosen from 6 faces gradually morphed from Happy to Angry, was presented for a two-alternative forced-choice task ('Happy' or 'Angry'). Using a psychometric function, points of subjective equality (PSEs) were estimated for each participant and statistically analyzed. Compared with the non-emotional word, reading an emotional word, 'Happiness' or 'Anger', significantly changed the PSE. Moreover, when the word 'Happiness' was read aloud, the PSE was further biased toward Happy. By demonstrating that emotion labels change the perceptual category boundary of facial emotion, the current results support the claim of Lindquist et al. (2015) that language plays an important role in the process of constructing emotion.
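The PSE estimation described above can be sketched by fitting a cumulative-Gaussian psychometric function to the choice proportions. The response data below are hypothetical, and a brute-force least-squares grid search stands in for whatever fitting procedure the study actually used:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical proportions of "Angry" responses at the 6 morph levels
# (0 = fully Happy, 1 = fully Angry).
morph = np.linspace(0, 1, 6)
p_angry = np.array([0.03, 0.10, 0.35, 0.72, 0.93, 0.98])

def cum_gauss(x, pse, sd):
    """Cumulative Gaussian: pse is the 50% point, sd controls the slope."""
    return 0.5 * (1.0 + erf((x - pse) / (sd * sqrt(2.0))))

# Grid search for the least-squares fit (a simple stand-in for a proper
# maximum-likelihood psychometric fit).
best_err, best_pse = float("inf"), None
for pse in np.linspace(0.0, 1.0, 201):
    for sd in np.linspace(0.05, 0.5, 91):
        err = sum((cum_gauss(x, pse, sd) - y) ** 2 for x, y in zip(morph, p_angry))
        if err < best_err:
            best_err, best_pse = err, pse

print(f"estimated PSE = {best_pse:.2f}")
```

A label-induced shift in the category boundary would appear as a shift in the estimated PSE between labeling conditions.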

The Korean Journal of Cognitive and Biological Psychology