Implementation of Reinforcement Learning in Adaptive Control of Mobile Robotics
Keywords:
Collision avoidance, Deep Q-Network, Mobile robot, Q-learning, Reinforcement learning

Abstract
This study investigates the application of Reinforcement Learning (RL) algorithms, specifically Q-learning and Deep Q-Network (DQN), to autonomous robot navigation in dynamic and uncertain environments. The problem addressed is the limitation of traditional rule-based control systems in handling real-time environmental changes, including moving obstacles, varying terrain, and inconsistent sensor conditions. The research evaluates the effectiveness of RL algorithms in generating optimal navigation paths, minimizing collision risk, and enhancing the robot's adaptability to environmental variation. A simulation-based experimental approach was employed using platforms such as Gazebo, the Robot Operating System (ROS), and Python-based simulators. The robot was trained over multiple interaction episodes, with a state space comprising position, velocity, and obstacle distance, and a reward function designed to encourage safe, efficient, goal-oriented navigation. Experimental results show that DQN significantly outperforms Q-learning, achieving shorter average path lengths (10.2 m vs. 12.5 m), lower collision rates (7% vs. 15%), faster convergence (180 vs. 350 episodes), and higher cumulative rewards (315 vs. 210). DQN's learning curves are also smoother and more stable, whereas Q-learning exhibits large fluctuations owing to its limited generalization. These findings confirm that DQN provides more efficient, safer, and more adaptive navigation, and holds substantial potential for next-generation autonomous robots in complex environments. Integration with strategies such as curriculum learning and multi-agent coordination could further enhance scalability and overall system performance.
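For readers unfamiliar with the baseline being compared, the tabular Q-learning method referenced above uses the standard temporal-difference update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), which DQN replaces with a neural-network approximation of Q. The following minimal Python sketch illustrates that standard update only; the state/action dimensions, learning rate, and discount factor are assumed values for illustration and are not taken from the paper's experiments:

import numpy as np

# Assumed sizes: a discretized state grid and a small action set (hypothetical).
n_states, n_actions = 100, 4
alpha, gamma = 0.1, 0.99  # assumed learning rate and discount factor
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """One standard tabular Q-learning (TD) update on transition (s, a, r, s_next)."""
    td_target = r + gamma * np.max(Q[s_next])   # bootstrapped return estimate
    Q[s, a] += alpha * (td_target - Q[s, a])    # move Q(s,a) toward the TD target

Because this table stores one value per discrete state-action pair, it cannot generalize across similar states, which is consistent with the fluctuating learning curves the abstract attributes to Q-learning.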


