Vol.11 No.2

Mikyong Sim(KRISS (Korea Research Institute of Standards and Science)) ; Jungsun Yoon(KRISS (Korea Research Institute of Standards and Science)) ; Kanghee Lee(Department of Psychology, Chungnam National University) pp.93-105
Abstract

In two experiments, the perception of ceiling height, the comfortable ceiling height, and the human sensibility of a room in a virtual environment were investigated. While the brightness of the wall color had no effect on the perception of ceiling height, it affected the human sensibility of the room. Subjects' height did not influence the perception of ceiling height but did influence the perception of a comfortable ceiling height. The finding of a significant difference in comfortable ceiling height according to subject height is congruent with the intrinsic body-scaled information of Warren and Mark. The comfortable ceiling heights found in our experiment were 2.496 m for the short group and 2.678 m for the tall group; π was 1.64 for the short group and 1.54 for the tall group. An artificial eye height that differed from subjects' actual eye height by 18.3 cm had no effect on either the perception of ceiling height or the perception of the comfortable ceiling height. The absence of any significant difference in the perceived comfortable ceiling height under the artificial eye height conflicts with the previous results of Warren and Mark. It was conjectured that this discrepancy may come from the fact that the previous experiments used a peephole and allowed subjects only monocular vision, whereas the present experiments allowed subjects to explore the room as much as possible in the virtual environment.
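If π is read as the body-scaled ratio of the comfortable ceiling height to the observer's eye height (an assumption following Warren and Mark's account of intrinsic, body-scaled information; the abstract does not define π explicitly), the reported values imply approximate eye heights of

\pi = \frac{H_{\text{comfort}}}{E_{\text{eye}}} \quad\Rightarrow\quad E_{\text{short}} \approx \frac{2.496}{1.64} \approx 1.52\ \text{m}, \qquad E_{\text{tall}} \approx \frac{2.678}{1.54} \approx 1.74\ \text{m},

which is consistent with the claim that the comfortable ceiling height scales with the observer's own body dimensions.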

Kichun Nam(Department of Psychology, Korea University) ; Yunkyoung Shin(Department of Psychology, Korea University) ; Yoonhyoung Lee(Department of Psychology, Korea University) ; Yumi Whang(Department of Linguistics, Korea University) ; Jaeuk Lee(Department of Korean and Literature, Korea University) ; Skrypiczajko Greg(The Institute of Foreign Language Studies, Korea University) pp.107-130
Abstract

The present study examined the role of orthographic and phonological information for Koreans and Americans in recognizing written English words using a primed lexical decision task. It is well known that phonological and orthographic form priming occurs when recognizing words of one's native language. Many previous studies have shown that this form-priming effect occurs because the phonological and orthographic similarity between the prime and target strings affects the processing of the targets at the pre-lexical or lexical access stage. The purpose of the current study was to examine whether the form-priming effect found in the native language is also exhibited when recognizing foreign-language words, and whether this kind of form priming is due to phonological or orthographic similarity. In Experiment 1, nonword primes were presented, and the relationship between the prime and target letter strings was manipulated across three conditions: phonologically similar, orthographically similar, and unrelated. For Americans, both phonological and orthographic similarity facilitated target word processing at the 100 msec SOA, whereas at the 1000 msec SOA only phonological similarity facilitated recognition of target words. For Koreans, however, only orthographic similarity facilitated target word recognition, and only at the 100 msec SOA. The results of Experiment 1 suggest that Korean readers do not use phonological information in recognizing English words. In Experiment 2, word primes were used to examine form-related processing at the lexical level. American subjects showed inhibitory effects of both phonological and orthographic prime-target similarity on target word recognition at the short SOA, whereas Korean subjects showed a significant inhibitory priming effect of orthographic similarity and a facilitatory trend of phonological similarity at the 100 msec SOA. The results of Experiment 2 imply that Koreans do not use the orthographic information for lexical access in English word recognition and do not actively use pre-lexical phonological information. In conclusion, Korean students do not actively use phonology-related information at the pre-lexical and lexical levels for recognizing English words, and they recognize English words mainly by utilizing orthographic information.

Yoon-Ki Min(Sejong University) ; Chang-Won Seo(Chungnam Nat'l University) ; Soo-Khil Shin(Sejong University) pp.131-152
Abstract

This study investigated the relationship between perceived distance and loudness. The phenomenon of visual capture was used to manipulate the apparent location of certain sound sources in an environment that included multiple visual and auditory sources varying in direction and distance. In Experiment 1, a Near sound source was located 10° to the left of the listener's midline at a distance of 2 m, and a Far source was located 10° to the right at a distance of 5 m. In Experiment 2, three auditory sources were located 15° to the left and right and at the midline, all at a distance of 3 m. Three auditory distance cues (sound level, frequency spectrum, and reverberation) were available to determine the perceived depth of the test sounds. The perceived distances of all sources were effectively modified in the experimental conditions by presentation of some of the visual stimuli. The results indicated a tendency for the perceived loudness of the sounds to be positively associated with their perceived distance, despite the absence of any physical change in the sounds.
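As a point of reference for the sound-level cue (a free-field, inverse-square-law approximation, not a description of the actual stimulus levels used in the study), a point source would be attenuated between the 2 m and 5 m distances of Experiment 1 by about

\Delta L = 20\log_{10}\!\left(\frac{r_2}{r_1}\right) = 20\log_{10}\!\left(\frac{5}{2}\right) \approx 8\ \text{dB},

that is, roughly 6 dB per doubling of distance; the visual-capture manipulation changes perceived distance while leaving such physical level differences untouched.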

Yang-Gyu Choi(Department of Early Childhood Special Therapeutic Education, Choonhae College) ; Hyun Jung Shin(Department of Psychology, Pusan National University) pp.153-169
Abstract

The purpose of this study was to critically evaluate the normalization models and to suggest the exemplar models as alternatives in explaining the processing of speaker variability in vowel perception. While the normalization models treat speaker variability as noise to be removed, the exemplar models use it as one of the important sources of information in vowel perception. A simulation study was conducted to quantitatively contrast the normalization models and the exemplar models (GCM and ALCOVE) by fitting them to Peterson and Barney's (1952) data. The quantitative fits of the exemplar models to Peterson and Barney's (1952) vowel and gender identification data were compared with those of the normalization models. The results showed that the predictions of the exemplar models were much better than those of the normalization models. It was suggested that the exemplar models can be more effective models for processing speaker variability in vowel perception. Limitations of the present study and further research problems are discussed in the final section.
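For concreteness, here is a minimal sketch of the exemplar approach compared in the paper, namely the GCM choice rule with an exponential similarity gradient. The formant values, category labels, and parameter settings below are hypothetical illustrations, not the Peterson and Barney (1952) measurements or the parameters actually fitted in the study.

import numpy as np

def gcm_probabilities(probe, exemplars, labels, c=1.0, r=2):
    """Generalized Context Model (GCM) choice rule.

    probe     : 1-D array of stimulus coordinates (e.g., normalized F1/F2)
    exemplars : 2-D array with one stored exemplar per row
    labels    : 1-D array of category labels, one per exemplar
    c         : sensitivity (similarity-scaling) parameter
    r         : distance metric (2 = Euclidean, 1 = city-block)
    """
    # Distance from the probe to every stored exemplar
    dists = np.sum(np.abs(exemplars - probe) ** r, axis=1) ** (1.0 / r)
    # Exponential similarity gradient
    sims = np.exp(-c * dists)
    # Luce choice rule: summed similarity per category, normalized
    cats = np.unique(labels)
    evidence = np.array([sims[labels == k].sum() for k in cats])
    return dict(zip(cats, evidence / evidence.sum()))

# Hypothetical exemplars: normalized (F1, F2) values for two vowel categories
exemplars = np.array([[0.30, 2.30], [0.35, 2.20], [0.70, 1.10], [0.75, 1.00]])
labels = np.array(["i", "i", "a", "a"])
print(gcm_probabilities(np.array([0.32, 2.25]), exemplars, labels, c=2.0))

ALCOVE extends this scheme with learned attention weights on the stimulus dimensions; both classes of exemplar model were fitted to the same vowel and gender identification data as the normalization models.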

Do-Joon Yi(Department of Psychology, Yonsei University) ; Min-Shik Kim(Department of Psychology, Yonsei University) pp.171-184
Abstract

Two visual search experiments were conducted to investigate whether the line motion illusion results from the local facilitation of a line stimulus within the gradient of attention. In Experiment 1, color-defined search elements were presented as sudden-onset stimuli, and a line appeared between them. Subjects showed no difference between a target and distractors in inducing an illusory sensation of the line projecting from them. In Experiment 2, search elements were presented through partial offset of premasks in order to prevent attentional capture by abrupt onset, and two stimulus onset asynchronies (SOAs) were applied to measure the temporal profile of the line motion illusion. The 150 msec SOA condition produced the same result as Experiment 1, whereas the target induced a greater amount of illusion than the distractors at the 250 msec SOA. These results provide evidence that spatial attention is not a necessary condition for the line motion illusion, suggesting that the illusion may be a byproduct of the binding process between a preceding stimulus and a subsequent line.

Yunkyung Shin(Department of Psychology, Korea University) ; Kichun Nam(Department of Psychology, Korea University) pp.185-197
Abstract

The current study was planned to examine the source of Koreans' difficulties in English speech perception: the question is whether perceptual difficulties with nonnative speech sounds arise from the detection of the acoustic features or from the mapping of acoustic features onto phonological representations. Many previous studies have reported that young infants can perceive nonnative sound features but that, after acquiring the native language, adults become insensitive to nonnative sounds. However, other studies have shown that adults who have difficulty perceiving nonnative speech sounds can discriminate acoustic features not existing in their native language, and that their perceptual ability improves after practice with the nonnative speech sounds. In this study, subjects were asked to decide whether the initial sounds of two stimuli were the same or not. The two sound stimuli were presented consecutively, with the standard stimulus preceding the probe stimulus. In one condition (the linguistic, or lexical context, condition), the preceding stimulus was a word; in the other condition (the non-linguistic condition), the preceding stimulus was a syllable-length sound. In both conditions, the probe stimulus was a phoneme-length sound. We reasoned that if Korean subjects' difficulty in perceiving English sounds comes from an incorrect mapping from the acoustic features of English sounds to phonological representations, and if the phonological representation of English words in the mental lexicon is based on Korean phonological structure, then Korean subjects would perform this discrimination task better in the non-linguistic condition than in the linguistic condition. The results showed that subjects made more errors and had slower reaction times in the linguistic condition than in the non-linguistic condition. This result indicates that Korean subjects' difficulty in perceiving English sounds arises from an incorrect mapping from the acoustic features of English sounds to phonological representations.

Hyung-Chul Li(Department of Industrial Psychology, Kwangwoon University) pp.199-209
Abstract

A compelling percept of the three-dimensionality of a transparent rotating cylinder is attainable from displays that are purely motion-defined. Interestingly, subjects rarely perceive physically introduced reversals of the cylinder's rotation direction (Treue, Andersen, Ando, & Hildreth, 1995; Li, 1996). Treue et al. interpreted this result as showing that local feature information may no longer be available after surface interpolation occurs. To test this possibility, subjects' performance in perceiving rotation reversals was compared in two conditions: a segregated condition and an unsegregated condition. In the segregated condition, the front and back surfaces were segregated by the type of micropatterns (orientation, spatial frequency, or luminance polarity), whereas they were not in the unsegregated condition. Subjects perceived many more rotation direction reversals in the segregated condition than in the unsegregated condition. When rotation reversals were not physically introduced but the local feature types were exchanged, subjects perceived many more illusory rotation reversals in the segregated condition. These results imply that the visual system responds sensitively to local feature types and that the front and back surfaces of the cylinder are labeled by the type of micropatterns after surface interpolation occurs.

Micha Park(Yonsei University) ; Tae-Jin Park(Chonnam National University) pp.211-225
Abstract

The present study was conducted to investigate level-of-processing effects on the contributions of recollection and familiarity to recognition memory. We manipulated level of processing (physical vs. semantic processing) and stimulus format (word vs. picture) utilizing the process-dissociation procedure. The effects of the two variables were measured by estimating recollection and familiarity in recognition memory. Level of processing had a significant effect on the estimates of both recollection and familiarity: semantic processing at the study phase increased the estimate of familiarity as well as recollection compared to physical processing. However, the pattern of dissociation between the estimates of recollection and familiarity was not identical across levels of processing and stimulus formats. Picture stimuli showed a higher estimate of recollection than word stimuli in both the semantic and the physical processing conditions. However, pictures did not differ from words in the estimate of familiarity in the physical processing condition, whereas words showed a higher estimate of familiarity than pictures in the semantic processing condition. The present findings are not consistent with the process-dissociation framework, which assumes that recollection and familiarity are independent and that familiarity in recognition is mediated by perceptual processes. The results rather support the proposal that familiarity in recognition may be more sensitive to conceptual processing.
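Under the independence assumption of the standard process-dissociation procedure (Jacoby, 1991), the recollection and familiarity estimates referred to above are recovered from performance in the inclusion and exclusion conditions. The equations below are the conventional estimation equations and are assumed here rather than quoted from the paper:

P(\text{inclusion}) = R + F(1-R), \qquad P(\text{exclusion}) = F(1-R)
\;\Rightarrow\; R = P(\text{inclusion}) - P(\text{exclusion}), \qquad F = \frac{P(\text{exclusion})}{1-R}.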

Sang Wook Hong(Department of Psychology, Yonsei University) ; Chan Sup Chung(Department of Psychology, Yonsei University) pp.227-241
Abstract

In three experiments, the effects of facial expression, change in facial expression, and experience with diverse facial expressions on the recognition of a person's face were investigated. Facial expression affected the response criterion in the direction of raising both hit and false alarm rates in recognition tasks, and this effect was more salient for 'negative' facial expressions. When a face learned with an emotional expression was shifted to a neutral expression in the test phase, the recognition rate was higher than in the opposite case. Learning with diverse facial expressions resulted in a higher recognition rate than learning repeatedly with a single facial expression, and this trend was more salient when the to-be-learned face was presented in a dispersed rather than a successive manner. In conclusion, the results of the experiments imply that facial expression affects the response criterion for face recognition and that diverse facial expressions facilitate learning of a face through the requirement of normalization.
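The criterion account can be made concrete with the usual equal-variance signal-detection measures; the hit and false-alarm rates below are hypothetical and serve only to illustrate what a criterion shift looks like, not the authors' data:

d' = z(H) - z(FA), \qquad c = -\tfrac{1}{2}\,\bigl[z(H) + z(FA)\bigr].

For example, moving from H = .80, FA = .20 (d' ≈ 1.68, c = 0) to H = .90, FA = .35 (d' ≈ 1.67, c ≈ -0.45) raises both rates while leaving sensitivity essentially unchanged, which is a liberal shift of the response criterion of the kind described above.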

Tae-Jin Park(Chonnam National University) ; Micha Park(Yonsei University) pp.243-259
Abstract

A process-dissociation procedure (Jacoby, 1991) was used to separate automatic and consciously controlled influences of memory in Korean perceptual and conceptual word completion tests. Modality shift (Exp. 1) and level of processing (Exp. 2) were manipulated at study. Automatic influences of memory (a) were observed in the same-modality condition but not in the cross-modality condition (a modality effect) for the perceptual word completion test, (b) were observed in both modality conditions (no modality effect) for the conceptual word completion test, and (c) remained invariant across the semantic and perceptual processing conditions (no effect of level of processing) for both tests. Controlled influences of memory (a) showed no modality effect and (b) showed an effect of level of processing. These results provide evidence that (a) both perceptual and conceptual word completion are often contaminated by consciously controlled influences of memory, (b) automatic influences of memory are highly dependent on perceptual processing at study for the perceptual word completion test but not for the conceptual word completion test, and (c) controlled influences of memory are highly dependent on conceptual processing but not on perceptual processing at study for both tests.

Jae-ho Lee(Chung-Ang University) ; Jung-Mo Lee(Sung Kyun Kwan University) pp.261-276
Abstract

This study was conducted to investigate the on-line effect of predictive inferences in text comprehension. In Experiment 1, using a lexical decision task, the predictive inference condition was found to be faster than the control condition. In Experiment 2, using a naming task, there was no difference between conditions. In Experiment 3, using a self-paced sentence reading task, the predictive inference condition was found to be faster than the control condition. The results of the three experiments suggest that predictive inferences occur on-line during reading. These results were explained from the elaborative position and the situation-model perspective.

The Korean Journal of Cognitive and Biological Psychology