Reinforcement Learning Algorithms for Probability Models Based on Mobile Robot Agent Path Planning in an Unpredictable Environment
Keywords:
Reinforcement learning algorithm, Q-learning algorithm, Markov decision process, PCTL, unpredictable environment, mobile robot agent

Abstract
In recent years, the development of reinforcement learning algorithms (RLAs) has significantly impacted various fields, including robotics. Mobile robots, which must navigate unpredictable environments, present a complex challenge that traditional probability model-checking methods often struggle to address under dynamic and uncertain conditions. This work modifies the Q-learning algorithm, a type of RLA, applying it sequentially to establish a probability matrix under uncertain conditions. A probability model is then developed with the robot agent selecting positions according to the maximum Q-table value of a 6×6 matrix, as per the assumed environment. The learned behaviour of the mobile robot agent, derived from the Q-learning method, is represented as a Markov Decision Process (MDP) model. Probabilistic Computation Tree Logic (PCTL) is employed to specify the dependability criteria of the mobile agent control system. The MDP model, together with its designated properties, is then input to the probabilistic model checker PRISM for automated verification. This approach proves effective in determining the goal position, selecting the optimal control model for evaluating performance, feasibility, and reliability, and reaching the target point most efficiently. From the PRISM model at episode 2500, the average reward and average number of steps obtained were 182.8 and 187.6, respectively. During the simulation, the mobile agent trained with the Q-learning algorithm achieved a maximum reward of 84.99 and a minimum reward of 61.41 in the PRISM performance evaluation.
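The Q-learning procedure described in the abstract — building a Q-table over a 6×6 grid and selecting positions by maximum Q-value — can be sketched as follows. The grid layout, start and goal cells, reward values, and hyperparameters here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Minimal tabular Q-learning sketch on a 6x6 grid world.
# All constants below are illustrative assumptions.
rng = np.random.default_rng(0)

N = 6                        # 6x6 grid, states numbered 0..35
GOAL = N * N - 1             # assume the goal is the bottom-right cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Apply an action; moves off the grid leave the position clamped."""
    r, c = divmod(state, N)
    dr, dc = ACTIONS[action]
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    nxt = nr * N + nc
    reward = 100.0 if nxt == GOAL else -1.0    # assumed reward scheme
    return nxt, reward, nxt == GOAL

Q = np.zeros((N * N, len(ACTIONS)))            # the Q-table
alpha, gamma, eps = 0.1, 0.9, 0.1              # learning rate, discount, exploration

for episode in range(2500):                    # abstract reports episode 2500
    s = 0                                      # assume start in the top-left cell
    for _ in range(500):                       # cap episode length
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # standard Q-learning update
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

# The greedy policy selects the action with the maximum Q-value per state.
policy = np.argmax(Q, axis=1)
```

The resulting greedy policy could then be encoded as a PRISM MDP module and checked against PCTL properties such as `Pmax=? [ F "goal" ]`, the maximum probability of eventually reaching the goal state.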
License
Copyright (c) 2025 International Journal of Integrated Engineering

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.