Mobile Robot Path Planning using Q-Learning with Guided Distance and Moving Target Concept

Authors

  • Pauline Ong, Universiti Tun Hussein Onn Malaysia
  • Ee Soong Low, Universiti Tun Hussein Onn Malaysia
  • Cheng Yee Low, Universiti Tun Hussein Onn Malaysia

Keywords:

Guided distance, moving target, mobile robot, path planning, Q-learning, reinforcement learning

Abstract

The classical Q-learning algorithm is a reinforcement learning algorithm that has been applied to mobile robot path planning. However, classical Q-learning suffers from a slow convergence rate and high computational time, owing to the random direction selection during the early stage of path planning. This weakness curtails the ability of a mobile robot to make instantaneous decisions in real-world applications. In this study, a guided distance aspect and a moving target concept were added to Q-learning in order to enhance direction decision making and to bypass dead ends. With these features, Q-learning converges faster and generates shorter paths. Consequently, the proposed improved Q-learning achieves average improvements of 29.34–94.85%, 18.29–29.69% and 75.76–99.50% in time used, shortest distance and total distance used, respectively.
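The abstract does not give the exact guided-distance formulation, so the following minimal Python sketch illustrates one plausible reading of the idea: during exploration, instead of choosing a direction uniformly at random (the weakness of classical Q-learning noted above), the agent prefers the action that reduces the Euclidean distance to the goal. The grid size, reward values, and hyperparameters below are assumptions for illustration, not the paper's settings.

```python
import math
import random

# Assumed toy setup: obstacle-free 5x5 grid, start (0,0), goal (4,4).
GRID = 5
GOAL = (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action, clamping to the grid; +100 reward at goal, -1 otherwise."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (x, y)
    return nxt, (100.0 if nxt == GOAL else -1.0)

def dist_to_goal(state):
    """Euclidean distance used to guide exploration toward the goal."""
    return math.hypot(GOAL[0] - state[0], GOAL[1] - state[1])

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with distance-guided (rather than random) exploration."""
    random.seed(seed)
    Q = {}  # maps (state, action_index) -> value
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):  # cap episode length
            if random.random() < eps:
                # Guided exploration: pick the action whose successor state
                # is closest to the goal (the "guided distance" idea).
                a = min(range(4), key=lambda i: dist_to_goal(step(s, ACTIONS[i])[0]))
            else:
                a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
            nxt, r = step(s, ACTIONS[a])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = nxt
            if s == GOAL:
                break
    return Q

def greedy_path(Q, max_steps=50):
    """Follow the learned greedy policy from the start to the goal."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        s, _ = step(s, ACTIONS[a])
        path.append(s)
        if s == GOAL:
            break
    return path
```

Because every exploratory move shrinks the distance to the goal, early episodes reach the goal quickly and useful Q-values propagate sooner, which is the intuition behind the faster convergence the abstract reports; the moving-target and dead-end handling of the actual method are not modeled here.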

Published

03-12-2020

How to Cite

Ong, P., Low, E. S., & Low, C. Y. (2020). Mobile Robot Path Planning using Q-Learning with Guided Distance and Moving Target Concept. International Journal of Integrated Engineering, 13(2), 177–188. Retrieved from https://publisher.uthm.edu.my/ojs/index.php/ijie/article/view/7639
