Two experiments examined attentional suppression in transparently moving dot fields. One group of dots moved coherently in a single direction (the effector) while the other moved randomly (the contender). During the motion-adaptation period, a proportion of the dots in the contender occasionally moved coherently in the direction orthogonal to the effector. In the ‘passive’ condition, observers viewed the stimulus without performing any task. In the ‘attentive’ condition, they were asked to attend to the occasional coherent motion in the contender and to report its direction. The motion aftereffect for the effector was significantly reduced in the attentive condition compared to the passive condition. This reduction was present even when the proportion of coherent dots in the contender was zero. Similar results were observed when the occasional coherent motion in the contender was only 30 deg away from the effector, which is well within the range of motion integration. These results show that attention to one component of bivectorial motion produces strong suppression of the unattended component as well as enhancement of the attended one. Such suppression at a small angular difference implies that attention to one of the superimposed motion components encourages segregation between different directions rather than integration.
Are human observers incapable of estimating the time-to-contact (TTC) of a tumbling rugby ball while watching it with a single eye, as Gray and Regan (2000) contend? Everyday experience suggests otherwise. In Gray and Regan’s study, the oval object rotated only 90 deg, so that on a given trial its projected shape changed either from a circle to an ellipse or from an ellipse to a circle, depending on the initial orientation of the object. Thus, although an infinite variety of optical patterns can be engendered by rotating non-spherical objects, only two types of deformation were depicted in Gray and Regan’s study. For that reason, additional studies are clearly warranted. The present study was directed at the perceptual capacity for estimating the TTC of rotating non-spherical objects. Two different response measures, a relative (Experiment 1) and an absolute (Experiment 2) judgment task, in conjunction with three types of objects, a sphere, a rugby-ball-shaped object, and a disk-shaped object, were employed for this purpose. The objects were depicted as texture-mapped images. Even with surface texture, the rotation displaced the texture elements projected to the observation point, and some even disappeared and were replaced by texture elements previously hidden on the far side of the object. Nevertheless, performance was accurate across all object types: participants were as accurate in judging the TTC of the two non-spherical objects as they were with the spherical object. Moreover, effects of velocity and size were observed, consistent with similar effects reported in other TTC studies. Taken together, the results contradict Gray and Regan’s contention and demonstrate that the human visual system is capable of perceiving the TTC of rotating non-spherical objects using information extracted from their surface texture.
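For background on the optical basis of these TTC judgments (a standard formulation, not detailed in the abstract itself): for a rigid, non-rotating object approaching at constant speed, TTC is specified by the optical variable tau, the ratio of the object's optical angle to its rate of expansion,

\[
\mathrm{TTC} \approx \tau = \frac{\theta(t)}{\dot{\theta}(t)},
\]

where \(\theta(t)\) is the visual angle subtended by the object. Rotation of a non-spherical object deforms the projected contour, so the contour's rate of expansion confounds approach with rotation; this is the basis of Gray and Regan's contention, and the abstract above argues that expansion information carried by the surface texture can nevertheless support accurate TTC judgments.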
Experiments that use eye movements to study various aspects of language processing implicitly assume tight links between eye movements and cognitive processes. On this assumption, variability in the measures can be interpreted as reflecting different on-line processes, and eye movement measures can be used to infer moment-to-moment cognitive processing. Accordingly, most recent studies of sentence processing using eye movements have employed various measures to better understand human sentence processing mechanisms. The different measures offered by eye movement analysis are valuable for distinguishing the time course of psycholinguistic processes such as early lexical processes, later structure-building processes, and sentence integration processes. However, the results of this study clearly show that a given eye movement measure does not always represent a particular cognitive process. Instead, a single cognitive process may be reflected in many eye movement measures, while a group of other cognitive processes may be reflected in only a single eye movement measure. To identify various on-line sentence processing patterns or strategies during reading, it is important to organize the various eye movement measures into several groups; it is even more important, however, to inspect each measure individually while considering the results from the other measures comprehensively. In doing so, we can gain a better picture of the on-line sentence processing mechanism, one that might be impossible to capture otherwise. This approach also fits better with the highly interrelated nature of eye movement measures.
Twenty-two Korean and twelve Japanese speakers living in South Korea were examined on their perceptual identification of the initial consonant in English syllables presented with or without white noise. The confusion matrices were then subjected to additive clustering, individual difference scaling, and transmitted-information analyses, and the results were compared with those of four English speakers living in South Korea. Koreans confused sounds that share the place of articulation but differ in the manner of articulation (closure vs. continuant), such as the /ʤ/-/z/, /s/-/θ/, /b/-/v/, and /d/-/ð/ pairs. Japanese listeners confused sounds that share the manner of articulation but differ in the place of articulation, such as the /l/-/r/, /s/-/θ/, /z/-/ð/, and /f/-/θ/ pairs. Overall, the results showed that Koreans and Japanese, who were assumed to have the same difficulties in perceiving English consonants, actually exhibit quite different error patterns. These differences might be caused by the perceptual structures of their mother tongues. The study suggests that methods for teaching English phonology should be adapted to the phonology of the learner's mother tongue.
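As background on the transmitted-information analysis named above (the standard information-theoretic definition, not spelled out in the abstract): given a confusion matrix, the information about the stimulus transmitted by the responses is the mutual information

\[
T(x; y) = \sum_{x,\, y} p(x, y) \, \log_2 \frac{p(x, y)}{p(x)\, p(y)},
\]

where \(p(x, y)\) is the joint probability of stimulus \(x\) and response \(y\) estimated from the matrix cells. Transmission for a single feature, such as place or manner of articulation, can be estimated by collapsing the matrix over the categories of that feature before computing \(T\).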
Studies of attention and working memory suggest that working memory contents guide attention toward memory-matching objects in a scene. The present study investigated whether the familiarity of working memory contents modulates this memory-based allocation of attention. We measured attention allocation by comparing response times (RTs) to memory-matching and non-matching probes while participants maintained either a novel or a familiar object in working memory. When a novel object was maintained in working memory, probe RTs at the memory-matching object were significantly faster than those at the non-matching object (Experiment 1). However, when participants maintained a familiar or highly learned object in working memory, there was no probe RT advantage for the memory-matching object (Experiments 2, 3, and 4). These results demonstrate that working memory does not automatically bias attention toward the memory-matching item; instead, the bias was present only for novel working memory contents. Thus, the guidance of attention by working memory contents could be due to a top-down strategy in which participants re-sample the memory item in the visual array in order to reduce the cognitive complexity of working memory maintenance.