Ball and Goal Detection with the YOLO Method Using an Omnidirectional Camera on the KRSBI-B Robot
DOI: https://doi.org/10.12928/biste.v4i2.6712
Keywords: Robot, Deep Learning, You Only Look Once, YOLO, KRSBI-B
Abstract
This research develops the object detection capability of a wheeled soccer robot using an omnidirectional camera and the You Only Look Once (YOLO) method. The results show that the robot can detect more than one object, namely the ball and the goal, on the green field. The study uses the UAD KRSBI-Wheeled robot with an omnidirectional camera as the sensor for the detection process; OpenCV 4.0, deep learning, and a laptop are used to build the detection model, with a ball and a goal as the objects to be detected. Two YOLO models were tested, YOLOv3 and YOLOv3-Tiny, and both detect the ball and goal objects at two frame sizes, 320x320 and 416x416. YOLOv3 achieves an mAP of 76% at 320x320 and 87.5% at 416x416, while YOLOv3-Tiny achieves an mAP of 68.1% at 320x320 and 75.5% at 416x416. Overall, YOLOv3 detects both object classes far more stably than YOLOv3-Tiny.
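To make the described pipeline concrete, below is a minimal sketch of running a YOLOv3 Darknet model through OpenCV's DNN module on camera frames, as the abstract describes. The file names (yolov3.cfg, yolov3.weights), the confidence and NMS thresholds, and the detect() helper are illustrative assumptions, not the authors' exact setup; only the two class names and the two input sizes come from the paper.

```python
# Minimal sketch of YOLOv3 inference with OpenCV's DNN module.
# Paths and thresholds are hypothetical placeholders, not the paper's values.
import cv2
import numpy as np

CLASSES = ["ball", "goal"]      # the two object classes in this study
INPUT_SIZE = (416, 416)         # or (320, 320), the other frame size tested
CONF_THRESHOLD = 0.5            # assumed confidence cutoff
NMS_THRESHOLD = 0.4             # assumed non-max suppression cutoff

# Load the Darknet config and weights (hypothetical file names).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect(frame):
    """Run one forward pass; return (class_name, confidence, box) tuples."""
    h, w = frame.shape[:2]
    # Normalize to [0,1], resize to the network input, and swap BGR -> RGB.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, INPUT_SIZE,
                                 swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences, class_ids = [], [], []
    for output in net.forward(out_layers):
        for det in output:               # det = [cx, cy, bw, bh, obj, scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > CONF_THRESHOLD:
                # YOLO outputs are relative; scale back to pixel coordinates.
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(class_id)
    # Suppress overlapping boxes, keeping the highest-confidence ones.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    return [(CLASSES[class_ids[i]], confidences[i], boxes[i])
            for i in np.array(keep).flatten()]
```

In practice the same detect() function would be called on each frame grabbed from the omnidirectional camera, with INPUT_SIZE switched between (320, 320) and (416, 416) to reproduce the two tested configurations.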
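The reported mAP figures are the per-class average precision (AP) averaged over the two classes. As a quick sanity check on that arithmetic, here is a minimal sketch in which the per-class AP values are hypothetical, chosen only so their mean matches the 87.5% reported for YOLOv3 at 416x416; the paper does not state the individual AP values.

```python
# Illustrative mAP arithmetic: mAP = mean of per-class average precision.
# The AP values below are assumptions; only their mean (87.5%) matches
# the paper's reported figure for YOLOv3 at 416x416.
ap_per_class = {"ball": 0.80, "goal": 0.95}   # hypothetical per-class AP
map_score = sum(ap_per_class.values()) / len(ap_per_class)
print(f"mAP = {map_score:.1%}")               # -> mAP = 87.5%
```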
License
Copyright (c) 2022 Riky Dwi Puriyanto, Farhan Fadhillah Sanubari
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.