
Development and Distribution of Deep Fake e-Learning Contents Videos Using Open-Source Tools

The Journal of Distribution Science, (P)1738-3110; (E)2093-7717
2022, v.20 no.11, pp.121-129
https://doi.org/10.15722/jds.20.11.202211.121
HO, Won
WOO, Ho-Sung
LEE, Dae-Hyun
KIM, Yong

Abstract

Purpose: Artificial intelligence is widely used, particularly the family of neural network methods known as Deep learning. Improvements in computing speed and capacity have accelerated the progress of Deep learning applications. Applying Deep learning in education opens various possibilities for creating and managing educational content and services that can replace human cognitive activity. Within Deep learning, Deep fake technology is used to combine and synchronize human faces with voices. This paper shows how to develop e-Learning content videos using these technologies and open-source tools. Research design, data, and methodology: This paper proposes a four-step development process, presented step by step in the Google Colab environment with source code. The technology can produce various video styles; its advantage is that the characters of a video can be extended to historical figures, celebrities, or even movie heroes, producing immersive videos. Results: Prototypes for each specific case were designed, developed, presented, and shared on YouTube. Conclusions: The method and process of creating e-Learning video content from image, video, and audio files using open-source Deep fake technology were successfully implemented.
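To make the Colab-based workflow described above more concrete, the following is a minimal sketch of the face-voice synchronization step only. It assumes the open-source Wav2Lip lip-sync repository (https://github.com/Rudrabha/Wav2Lip) has already been cloned with its pretrained checkpoint downloaded; the input file names and output path are illustrative placeholders rather than the paper's actual assets, and the paper's full four-step process is not reproduced here.

# A minimal sketch of one lip-synchronization step, assuming the open-source
# Wav2Lip repository has been cloned into ./Wav2Lip and its pretrained GAN
# checkpoint downloaded. All file names below are illustrative placeholders,
# resolved relative to the repository folder.
import subprocess

FACE = "presenter.jpg"                      # still image or video of the chosen character
AUDIO = "narration.wav"                     # recorded or synthesized narration track
CHECKPOINT = "checkpoints/wav2lip_gan.pth"  # pretrained Wav2Lip weights inside the repo

# Call the repository's inference script to produce a lip-synced video clip.
subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", CHECKPOINT,
        "--face", FACE,
        "--audio", AUDIO,
        "--outfile", "results/lecture_clip.mp4",
    ],
    cwd="Wav2Lip",
    check=True,
)

In a fuller pipeline, a clip produced this way could then be combined with slides or background footage before publishing, in the spirit of the case prototypes the paper shares on YouTube.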

Keywords
e-Learning, Deep learning, Deep fake, Open-source, Distribution

References

1.

Brand, M. (1999). Voice puppetry. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, 21-28.

2.

Chen, L., Maddox, R. K., Duan, Z., & Xu, C. (2019). Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 7832-7841.

3.

Eri, W. I., & Susiana (2019). Using the ADDIE model to develop learning material for actuarial mathematics. Journal of Physics: Conference Series, 1188. DOI:10.1088/1742-6596/1188/1/012052.

4.

Ginosar, S., Bar, A., Kohavi, G., Chan, C., Owens, A., & Malik, J. (2019). Learning individual styles of conversational gesture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3497-3506.

5.

Karras, T., Aila, T., Laine, S., Herva, A., & Lehtinen, J. (2017). Audio-driven facial animation by joint end-to-end learning of pose and emotion. ACM Transactions on Graphics (TOG), 36(4), 94:1-94:12.

6.

Kim, Y. (2017). A Study on e-learning Contents Opening Information for Distribution Industry Labor Competence. Journal of Distribution Science, 15(8), 65-73. DOI:10.15722/jds.15.8.201708.65.

7.

Kim, Y. (2018). A Design of Human Cloud Platform Framework for Human Resources Distribution of e-Learning Instructional Designer. Journal of Distribution Science, 16(7), 67-75. DOI:10.15722/jds.16.7.201807.67.

8.

Kizilcec, R., Bailenson, J., & Gomez, C. (2015). The Instructor's Face in Video Instruction: Evidence From Two Large-Scale Field Studies. Journal of Educational Psychology, 107(3), 724–739. https://doi.org/10.1037/edu0000013.

9.

Lee, D. H., Kim, Y., & You, Y. Y. (2018). Learning window design and implementation based on Moodle-based interactive learning activities. Indian Journal of Public Health Research and Development, 9(8), 626-632. DOI:10.5958/0976-5506.2018.00803.3.

10.

Mio, C., Ventura-Medina, E., & João, E. (2019). Scenario-based eLearning to promote active learning in large cohorts: Students' perspective. Computer Applications in Engineering Education, 27(4), 894-909. DOI:10.1002/cae.22123.

11.

Shiratori, T., Nakazawa, A., & Ikeuchi, K. (2006). Dancing-to-music character animation. Computer Graphics Forum, 25(3), 449-458. DOI:10.1111/j.1467-8659.2006.00964.x.

12.

Suwajanakorn, S., Seitz, S. M., & Kemelmacher-Shlizerman, I. (2017). Synthesizing Obama: learning lip sync from audio. ACM Transactions on Graphics (TOG), 36(4), 95:1-95:13.

13.

Taylor, S., Kim, T., Yue, Y., Mahler, M., Krahe, J., Rodriguez, A. G., ... & Matthews, I. (2017). A deep learning approach for generalized speech animation. ACM Transactions on Graphics (TOG), 36(4), 93:1-93:11.

14.

Thies, J., Elgharib, M., Tewari, A., Theobalt, C., & Nießner, M. (2020). Neural voice puppetry: Audio-driven facial reenactment. In European conference on computer vision, 716-731. Springer, Cham.

15.

Vougioukas, K., Petridis, S., & Pantic, M. (2020). Realistic Speech-Driven Facial Animation with GANs. International Journal of Computer Vision, 128, 1398–1413.

16.

Ho, W., Lee, D. H., & Kim, Y. (2021). Implementation of an Integrated Online Class Model using Open-Source Technology and SNS. International Journal on Informatics Visualization, 5(3), 218-223. DOI:10.30630/joiv.5.3.668.

17.

Zhou, Y., Han, X., Shechtman, E., Echevarria, J., Kalogerakis, E., & Li, D. (2020). MakeItTalk: speaker-aware talking-head animation. ACM Transactions on Graphics (TOG), 39(6), 1-15.

18.

Zhou, Y., Xu, Z., Landreth, C., Kalogerakis, E., Maji, S., & Singh, K. (2018). Visemenet: Audio-driven animator-centric speech animation. ACM Transactions on Graphics (TOG), 37(4), 1:1-1:10.
