This study examines the interactive effect of unsynchronized spoken words and imagery in audio/image surrogates for video gisting. We conducted an experiment with 64 participants under the assumption that participants would understand the content of videos better when viewing combined audio/image surrogates than when viewing audio-only or image-only surrogates. The results showed that, overall, audio/image surrogates were better than audio-only or image-only surrogates for video gisting, although the lack of synchronization made it difficult for some participants to attend to both the audio and the images when the two channels presented very different content.
김현희. 2007. “An experimental study on the effectiveness of visual abstracts for extracting the meaning of video data.” 정보관리학회지, 24(4): 53-72.
김현희. 2009. “A study on audio information summarization techniques for video.” 정보관리학회지, 26(3): 169-188.
이경미 et al. 2008. “Textile image retrieval combining content, emotion, and metadata.” 한국인터넷정보학회논문집, 9(5): 99-108.
Ding, W., et al. 1999. “Multimodal surrogates for video browsing.” Proceedings of the Fourth ACM Conference on Digital Libraries, 85-93. Berkeley, CA.
Gunther, R., Kazman, R., & MacGregor, C. 2004. “Using 3D sound as a navigational aid in virtual environments.” Behaviour and Information Technology, 23(6): 435-446.
Hughes, A., et al. 2003. “Text or pictures? An eye-tracking study of how people view digital video surrogates.” Proceedings of CIVR 2003, 271-280.
Iyer, H., & Lewis, C. 2007. “Prioritization strategies for video storyboard keyframes.” Journal of the American Society for Information Science and Technology, 58(5): 629-644.
Kennedy, L., Naaman, M., Ahern, S., Nair, R., & Rattenbury, T. 2007. “How Flickr helps us make sense of the world: Context and content in community-contributed media collections.” Proceedings of ACM Multimedia, 2007. Augsburg, Germany. [online]. [cited 2010.5.10]. <http://infolab.stanford.edu/~mor/research/kennedyMM07.pdf>.
Kristin, B., et al. 2006. Audio Surrogation for Digital Video: A Design Framework. UNC School of Information and Library Science (SILS) Technical Report TR 2006-21.
Marchionini, G., et al. 2009. “Multimedia surrogates for video gisting: Toward combining spoken words and imagery.” Information Processing and Management, 45: 615-630.
Paivio, A. 1986. Mental Representations. New York: Oxford University Press.
Schmandt, C., & Mullins, A. 1995. “AudioStreamer: Exploiting simultaneity for listening.” CHI 95 Conference Companion 1995, 218-219.
Song, Y., & Marchionini, G. 2007. “Effects of audio and visual surrogates for making sense of digital video.” Proceedings of CHI 2007, 867-876. San Jose, CA, USA.
Song, Y., Marchionini, G., & Oh, C. 2010. “What are the most eye-catching and ear-catching features in the video?: Implications for video summarization.” Proceedings of the 19th International Conference on World Wide Web 2010. Raleigh, North Carolina.
Wildemuth, B., et al. 2002. “Alternative surrogates for video objects in a digital library: Users' perspectives on their relative usability.” Proceedings of the 6th European Conference on Digital Libraries, 493-507. New York: Springer.
Yang, M. 2005. An Exploration of Users’ Video Relevance Criteria. Ph.D. diss., University of North Carolina at Chapel Hill.
Yang, M., & Marchionini, G. 2004. “Exploring users' video relevance criteria: A pilot study.” Proceedings of the Annual Meeting of the American Society of Information Science and Technology, Nov. 12-17, 2004. 229-238. Providence, RI.