
Developmental Changes of Top-down Attentional Modulation in Saliency Model

The Korean Journal of Cognitive and Biological Psychology, (P)1226-9654; (E)2733-466X
2019, v.31 no.4, pp.265-273
https://doi.org/10.22172/cogbio.2019.31.4.001


Abstract

To investigate developmental changes of top-down attentional modulation in visual saliency, the current study examined infants' eye movements with three models: (1) the equal weight model, in which all weights of low-level features were assumed to be equal; (2) the unequal weight model, in which feature weights were allowed to differ; and (3) the unequal weight model with face weight, in which an additional weight was assumed for face stimuli on top of the unequal low-level feature weights. These models were fitted to 4-, 6-, and 8-month-old infants' first-fixation data to estimate the set of feature weights that best explained the data. The results showed that the unequal weight model and the unequal weight model with face weight predicted infants' eye movements more accurately than the equal weight model. In addition, 6- and 8-month-old infants' eye movements were explained significantly better by the unequal weight model with face weight than by the unequal weight model. These results suggest that low-level features contribute to the visual system with different weights and that face stimuli attract more attention as infants grow older. Our findings also highlight the importance of adjusting feature weights in studies of visual and attentional development.
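
To make the structural difference between the three model variants concrete, here is a minimal sketch, assuming precomputed low-level conspicuity maps (e.g., intensity, color, and orientation from an Itti-Koch-style pipeline) and a face map as same-sized arrays. The function name, map names, and weight values are illustrative placeholders, not the authors' implementation or the fitted estimates reported in the study.

```python
import numpy as np

def combined_saliency(feature_maps, weights):
    """Weighted linear combination of feature maps into a single saliency map."""
    saliency = np.zeros_like(next(iter(feature_maps.values())))
    for name, fmap in feature_maps.items():
        saliency += weights.get(name, 0.0) * fmap
    total = sum(weights.values())
    return saliency / total if total > 0 else saliency  # normalize by weight sum

h, w = 480, 640
low_level = {                        # hypothetical precomputed conspicuity maps
    "intensity": np.random.rand(h, w),
    "color": np.random.rand(h, w),
    "orientation": np.random.rand(h, w),
}
face_map = np.random.rand(h, w)      # hypothetical face-channel map

# (1) Equal weight model: all low-level features contribute equally.
equal = combined_saliency(low_level, {k: 1.0 for k in low_level})

# (2) Unequal weight model: each low-level feature has its own weight
#     (in the study, such weights were estimated from first-fixation data).
unequal = combined_saliency(
    low_level, {"intensity": 0.2, "color": 0.5, "orientation": 0.3})

# (3) Unequal weight model with face weight: an extra channel for faces.
with_face = combined_saliency(
    {**low_level, "face": face_map},
    {"intensity": 0.15, "color": 0.35, "orientation": 0.20, "face": 0.30})
```

In the study itself, the weights were estimated by fitting each model to infants' first-fixation data; the sketch only illustrates how the three weighting schemes differ in structure.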

Keywords

bottom-up attention, top-down attention, saliency, saliency toolbox, infants, development, face
