Mobile Robot Path Planning using Q-Learning with Guided Distance and Moving Target Concept
Keywords:
Guided distance, moving target, mobile robot, path planning, Q-learning, reinforcement learning
Abstract
The classical Q-learning algorithm is a reinforcement learning algorithm that has been applied to path planning of mobile robots. However, classical Q-learning suffers from a slow convergence rate and high computational time, owing to the random direction decisions made during the early stage of path planning. This weakness limits the mobile robot's ability to make instantaneous decisions in real-world applications. In this study, a guided distance aspect and a moving target concept were added to Q-learning in order to enhance direction decision making and to bypass dead ends. With these features, Q-learning converges faster and generates shorter paths. Consequently, the proposed improved Q-learning achieves average improvements of 29.34-94.85%, 18.29-29.69% and 75.76-99.50% in time used, shortest distance and total distance used, respectively.
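The abstract does not give the exact formulation of the guided distance or moving target mechanisms. As a rough, non-authoritative illustration of the general idea, the sketch below shows a plain grid-world Q-learning loop in which exploratory moves are biased toward the action that most reduces the Euclidean distance to the goal, a stand-in for a distance-guided decision rule. The grid layout, reward values, hyperparameters and bias rule are assumptions for demonstration only, not the authors' method.

```python
# Minimal sketch: grid-world Q-learning with distance-guided exploration.
# All constants below are illustrative assumptions, not taken from the paper.
import math
import random

GRID = [            # 0 = free cell, 1 = obstacle (hypothetical map)
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
]
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right
ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 500

def valid(s):
    r, c = s
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == 0

def step(s, a):
    nxt = (s[0] + a[0], s[1] + a[1])
    if not valid(nxt):
        return s, -5.0        # penalty for hitting a wall or obstacle
    if nxt == GOAL:
        return nxt, 100.0     # reward for reaching the goal
    return nxt, -1.0          # small step cost encourages short paths

def dist_to_goal(s):
    return math.hypot(GOAL[0] - s[0], GOAL[1] - s[1])

Q = {}                        # Q[(state, action)] -> value, default 0.0

def q(s, a):
    return Q.get((s, a), 0.0)

def choose_action(s):
    if random.random() < EPSILON:
        # Distance-guided exploration: instead of a purely random move,
        # prefer the action that shrinks the straight-line distance to the goal.
        return min(ACTIONS, key=lambda a: dist_to_goal((s[0] + a[0], s[1] + a[1])))
    return max(ACTIONS, key=lambda a: q(s, a))    # greedy w.r.t. current Q

for _ in range(EPISODES):
    s = START
    for _ in range(100):                          # cap episode length
        a = choose_action(s)
        nxt, r = step(s, a)
        best_next = max(q(nxt, a2) for a2 in ACTIONS)
        # Standard Q-learning temporal-difference update
        Q[(s, a)] = q(s, a) + ALPHA * (r + GAMMA * best_next - q(s, a))
        s = nxt
        if s == GOAL:
            break

# Greedy roll-out of the learned policy
s, path = START, [START]
while s != GOAL and len(path) < 20:
    s, _ = step(s, max(ACTIONS, key=lambda a: q(s, a)))
    path.append(s)
print(path)
```

In this toy setup the distance bias replaces uniform random exploration, which is one simple way to reduce wasted early-stage moves; the paper's full method (including the moving target concept) should be consulted for the actual algorithm.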
License
Open access is provided by licensing the content under a Creative Commons (CC) license.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.