Deep Reinforcement Learning for Mapless Navigation of Autonomous Mobile Robot

Harsh Yadav, Honghu Xue, Yan Rudall, Mohamed Bakr, Benedikt Hein, Elmar Rueckert, Ngoc Thinh Nguyen*

*Corresponding author for this work

Abstract

This paper presents a study on the mapless navigation of an autonomous mobile robot using Deep Reinforcement Learning in an intralogistics setting. The task is for the robot to learn to navigate to a goal without prior knowledge of the environment. We design, train, and apply a controller based on the Soft Actor-Critic algorithm to navigate a robot equipped with a 360° LiDAR and a front camera. The controller is successfully validated in an almost fully observable environment through extensive simulations. We further investigate the performance of the proposed controller, and its possible limitations, in a partially observable environment, using a 3D Temporal Convolution Network to process the time series of images from the visual observations. Besides partial observability, we also address the problem of sparse positive rewards when training the Deep Reinforcement Learning algorithm, combining Automatic Curriculum Learning with Dual Prioritized Experience Replay.
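The abstract does not spell out how Dual Prioritized Experience Replay is organized. As one hedged illustration, a common reading is two priority-weighted buffers, one for transitions from successful (goal-reached) episodes and one for the rest, with each training batch drawing a fixed share from the success buffer so that rare positive rewards are replayed more often. The class name, the success/failure split, and the `success_fraction` parameter below are assumptions for illustration, not the paper's exact formulation:

```python
import random


class DualPrioritizedReplayBuffer:
    """Sketch of a dual prioritized replay buffer (assumed structure):
    two buffers split by episode outcome, each sampled with priorities
    proportional to the absolute TD error of its transitions."""

    def __init__(self, capacity=10_000, success_fraction=0.5, eps=1e-3):
        self.capacity = capacity
        self.success_fraction = success_fraction  # batch share from successes
        self.eps = eps  # keeps every priority strictly positive
        self.success = []  # lists of [priority, transition]
        self.failure = []

    def add(self, transition, td_error, reached_goal):
        """Store a transition in the buffer matching its episode outcome."""
        buf = self.success if reached_goal else self.failure
        buf.append([abs(td_error) + self.eps, transition])
        if len(buf) > self.capacity:
            buf.pop(0)  # drop the oldest transition

    def _sample_from(self, buf, k):
        # Priority-proportional sampling with replacement.
        weights = [priority for priority, _ in buf]
        picks = random.choices(buf, weights=weights, k=k)
        return [transition for _, transition in picks]

    def sample(self, batch_size):
        """Draw a batch, reserving a fixed share for successful episodes."""
        n_succ = int(batch_size * self.success_fraction) if self.success else 0
        batch = self._sample_from(self.success, n_succ) if n_succ else []
        batch += self._sample_from(self.failure, batch_size - n_succ)
        return batch
```

The fixed success share guarantees that sparse positive-reward transitions keep appearing in gradient updates even when failures dominate the collected experience.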

Original language: English
Title of host publication: 2023 27th International Conference on System Theory, Control and Computing (ICSTCC)
Number of pages: 6
Publisher: IEEE
Publication date: 2023
Pages: 283-288
ISBN (Print): 979-8-3503-3799-0
ISBN (Electronic): 979-8-3503-3798-3
DOIs
Publication status: Published - 2023