Object Detection Using YOLO for Quadruped Robot Manipulation


  • Nor Aiman Alimie Norizan, UTHM
  • Mohd Razali Md Tomari
  • Wan Nurshazwani Wan Zakaria, UTHM


You Only Look Once (YOLO), Detection Algorithm, Custom Dataset, Google Colab, Robot Operating System (ROS)


This paper focuses on the robot arm's vision system for integration with a quadruped robot. The quadruped robot offers high indoor and outdoor maneuverability but lacks the ability to perform manipulation tasks. A robot arm with a capable vision system therefore needs to be integrated into the quadruped robot to provide manipulation capabilities. Hence, the objective of this study is to develop and apply custom-trained datasets on different YOLO algorithm architectures for the robot arm's vision. Prior to the vision system development, a suitable robot arm for the application is identified, and its motion and control are developed. The custom dataset is preprocessed using Roboflow and trained using Google Colab. Two versions of YOLO are developed for the object detection algorithm, namely YOLOv3-tiny and YOLOv5s. A comparison study in terms of speed, confidence level, and probability of detection is conducted to evaluate both YOLO versions. It was found that YOLOv5s provides overall better performance than YOLOv3-tiny. All three aspects (speed, confidence level, and probability of detection) are essential to ensure the robot arm can identify, move to, and grasp the object efficiently.
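The role of the confidence level in the grasping pipeline can be illustrated with a minimal sketch. This is not code from the paper: the detection tuple format `(class_name, confidence, bounding_box)` and the function `select_grasp_target` are illustrative assumptions about how YOLO outputs might be filtered before commanding the arm.

```python
# Illustrative sketch (not from the paper): choosing a grasp target from
# YOLO-style detections. Each detection is assumed to be a tuple of
# (class_name, confidence, (x, y, w, h)) in pixel coordinates.

def select_grasp_target(detections, target_class, conf_threshold=0.5):
    """Return the highest-confidence detection of target_class, or None
    if no detection of that class meets the confidence threshold."""
    candidates = [d for d in detections
                  if d[0] == target_class and d[1] >= conf_threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda d: d[1])

# Example detections, as a YOLO model might report them for one frame.
detections = [
    ("bottle", 0.42, (120, 80, 40, 90)),
    ("bottle", 0.87, (300, 150, 38, 95)),
    ("cup",    0.91, (50, 200, 30, 30)),
]

best = select_grasp_target(detections, "bottle")
print(best)  # ("bottle", 0.87, (300, 150, 38, 95))
```

A higher-confidence detector (such as YOLOv5s in the comparison above) leaves more valid candidates above the threshold, so the arm is less likely to miss or misidentify the target.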




How to Cite

Norizan, N. A. A., Md Tomari, M. R., & Wan Zakaria, W. N. (2023). Object Detection Using YOLO for Quadruped Robot Manipulation. Evolution in Electrical and Electronic Engineering, 4(1), 329–336. Retrieved from https://publisher.uthm.edu.my/periodicals/index.php/eeee/article/view/10778



Mechatronics and Robotics