Deep Drifting: Autonomous Drifting of Arbitrary Trajectories using Deep Reinforcement Learning

Fabian Domberg, Carlos Castelar Wembers*, Hiren Patel, Georg Schildbach

*Corresponding author for this work

Abstract

In this paper, a Deep Neural Network is trained using Reinforcement Learning in order to drift along arbitrary trajectories that are defined by a sequence of waypoints. In the first step, a highly accurate vehicle simulation is used for the training process. Then, the obtained policy is refined and validated on a self-built model car. The chosen reward function is inspired by the scoring process of real-life drifting competitions. It is kept simple and is thus applicable to very general scenarios. The experimental results demonstrate that a relatively small network, given only a few measurements and control inputs, already achieves outstanding performance. In simulation, the learned controller is able to reliably hold a steady-state drift. Moreover, it is capable of generalizing to arbitrary, previously unknown trajectories and different driving conditions. After transferring the learned controller to the model car, it also performs surprisingly well given the physical constraints.
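The abstract does not give the reward formula itself, but a reward "inspired by the scoring process of real-life drifting competitions" that also tracks a waypoint sequence might be sketched as follows. This is a minimal illustration, not the authors' actual function; the shaping terms, the target sideslip angle, and the distance scale are all assumptions.

```python
import math

def drift_reward(slip_angle, speed, dist_to_waypoint,
                 target_slip=0.6, max_dist=5.0):
    """Hypothetical drift reward (not the paper's exact formula):
    reward sustained sideslip while staying near the waypoints."""
    # Drift term: peaks when |slip_angle| (rad) matches the target sideslip
    slip_term = math.exp(-(abs(slip_angle) - target_slip) ** 2)
    # Tracking term: linearly penalize distance to the next waypoint
    track_term = max(0.0, 1.0 - dist_to_waypoint / max_dist)
    # Scale by speed so that standing still earns no reward
    return speed * slip_term * track_term
```

Multiplying the terms (rather than summing them) means the agent only scores when it is simultaneously fast, sideways, and on the line, which loosely mirrors how drift judges combine angle, speed, and line into a single score.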

Original language: English
Title of host publication: 2022 International Conference on Robotics and Automation (ICRA)
Number of pages: 6
Publisher: IEEE
Publication date: 05.2022
Publication status: Published - 05.2022

Research Areas and Centers

  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)
