
Towards Real Time Detection of Rice Weed in Uncontrolled Crop Conditions

Journal of The Korea Internet of Things Society, pISSN 2466-0078
2020, v.6 no.1, pp.83-95
https://doi.org/10.20465/kiots.2020.6.1.083
Muhammad Umraiz (Jeonbuk National University)
Sang-Cheol Kim (Jeonbuk National University)


Abstract

Precisely detecting weeds in a practical crop-field environment is a dense and complex task, and previous approaches fall short of processing image frames both quickly and accurately. Although much attention has been given to classifying plant diseases, the crop-weed detection problem has remained in the limelight. Previous approaches report fast algorithms, but their inference times are far from real time, making them impractical in uncontrolled conditions. We therefore propose a detection model for the complex rice-weed detection task. Experimental results show that our approach reduces inference time by a significant margin on the weed detection task, making it practically deployable in real conditions. The samples were collected at two different growth stages of rice and annotated manually.

Keywords
Real Time Weed Detection, Smart Farming using IoT, IoT with Object Detection, Rice Weed Detection, Weed Detection using Deep Learning
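The abstract's central claim is that inference time, not just accuracy, decides whether a weed detector is deployable in the field. The paper's actual model is not reproduced here, but a minimal, hypothetical sketch shows how one might check any detector against a real-time frame budget; `detect` is a stand-in stub, not the authors' method:

```python
# Hypothetical sketch: checking a detector against a real-time frame budget.
# `detect` is a dummy stand-in for one forward pass of a weed-detection model;
# the paper's actual architecture is not shown on this page.
import time

def detect(frame):
    # Stand-in inference: returns a list of (x, y, w, h, label) boxes.
    return [(10, 20, 40, 40, "weed")]

def frames_per_second(frames, detector):
    """Time `detector` over all frames and return the achieved throughput."""
    start = time.perf_counter()
    for frame in frames:
        detector(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed if elapsed > 0 else float("inf")

if __name__ == "__main__":
    frames = [object()] * 100  # placeholder image frames
    fps = frames_per_second(frames, detect)
    # A common real-time threshold for field video is ~30 FPS.
    print(f"{fps:.1f} FPS -> meets 30 FPS budget: {fps >= 30}")
```

With a real model, `detect` would be replaced by the network's forward pass (and timing would include any pre/post-processing), so the measured FPS reflects end-to-end deployability rather than the raw layer count.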
