


Design and Evaluation of the Key-Frame Extraction Algorithm for Constructing the Virtual Storyboard Surrogates

정보관리학회지 / Journal of the Korean Society for Information Management, (P)1013-0799; (E)2586-2073
2008, v.25 no.4, pp.131-148
https://doi.org/10.3743/KOSIM.2008.25.4.131
Hyun-Hee Kim (Myongji University)


Abstract

The purposes of this study are to design a key-frame extraction algorithm for constructing virtual storyboard surrogates and to evaluate the efficiency of the proposed algorithm. First, a theoretical framework was built through two tasks: reviewing previous studies on relevance and on image recognition and classification, and conducting an experiment with 20 participants to identify their frame-recognition patterns. On this basis, the key-frame extraction algorithm was constructed. The efficiency of the proposed algorithm (a hybrid method) was then evaluated in an experiment with 42 participants, in which it was compared, in terms of accuracy in summarizing or indexing a video, to a random method that extracts key frames simply at an interval of a few seconds (or minutes). Finally, ways to utilize the proposed algorithm in digital libraries and the Internet environment were suggested.
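The random-method baseline described above samples frames at a fixed time interval rather than by content analysis. A minimal sketch of such interval-based extraction, computing frame indices only; the function name and parameters are illustrative assumptions, not taken from the paper:

```python
def interval_keyframes(duration_s: float, fps: float, interval_s: float) -> list[int]:
    """Return frame indices sampled every `interval_s` seconds.

    Sketches the baseline ("random method") the paper compares against:
    key frames taken at a fixed interval of a few seconds (or minutes),
    with no content analysis.
    """
    if interval_s <= 0 or fps <= 0:
        raise ValueError("interval_s and fps must be positive")
    total_frames = int(duration_s * fps)
    step = max(1, int(interval_s * fps))
    return list(range(0, total_frames, step))

# Example: a 60-second clip at 25 fps, one key frame every 10 seconds
print(interval_keyframes(60, 25, 10))
```

A content-based hybrid method, by contrast, would weight candidate frames by visual or semantic cues before selection; the accuracy gap between the two approaches is what the paper's 42-participant experiment measures.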

Keywords
image, video, storyboard, surrogate, sense making, key-frame extraction algorithm, video abstract, digital library, key frame, hybrid method, random method, summarization, indexing

