Learning Hierarchical Acquisition Functions for Bayesian Optimization

Nils Rottmann, Tjasa Kunavar, Jan Babic, Jan Peters, Elmar Rueckert


Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates, or unknown constraints. In contrast, humans can infer protective and safe solutions after a single failure or unexpected observation. In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks. A Gaussian process implements both the modeling and the sampling of the acquisition function. This enables rapid learning with large learning rates, while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task. The method outperforms standard optimization techniques, such as Bayesian optimization, in the number of interactions needed to solve the task, in the computational demands, and in the frequency of observed failures. Further, we show that our method performs similarly to humans when learning the postural balancing task, by comparing our simulation results with real human data.
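The abstract does not spell out the algorithm, but its two key ingredients are a Gaussian-process surrogate used to sample an acquisition function and a memory of failed policies that inhibits nearby samples. The following is a minimal illustrative sketch, not the authors' implementation: the 1-D `objective` with a failure region, the UCB acquisition, the failure threshold, and the veto radius are all assumptions made for demonstration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    # Squared-exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # GP posterior mean and variance at query points Xs, given data (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, 0), 1e-12, None)  # prior variance is 1
    return mu, var

def objective(x):
    # Hypothetical 1-D reward with a catastrophic "failure" region near x = 0.8.
    return float(np.sin(3 * x) - 5.0 * (abs(x - 0.8) < 0.05))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (3, 1))                      # initial random policies
y = np.array([objective(x[0]) for x in X])
failed = [x for x in X if objective(x[0]) < -1.0]  # memorized failure points

for _ in range(15):
    Xs = rng.uniform(0, 1, (200, 1))               # candidate policies
    mu, var = gp_posterior(X, y, Xs)
    ucb = mu + 2.0 * np.sqrt(var)                  # upper-confidence-bound acquisition
    # Inhibit candidates close to memorized failures (replay-style veto).
    for xf in failed:
        ucb[np.abs(Xs[:, 0] - xf[0]) < 0.1] = -np.inf
    x_next = Xs[np.argmax(ucb)]
    y_next = objective(x_next[0])
    if y_next < -1.0:
        failed.append(x_next)                      # memorize the failure
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best reward found:", round(float(y.max()), 3))
```

Once a sample falls into the failure region, every later acquisition maximization excludes its neighborhood, so the optimizer does not repeat the mistake; this mimics, in a very reduced form, the memorization-and-inhibition behavior the paper attributes to its hierarchical acquisition function.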
Original language: English
Number of pages: 7
Publication status: Published - 2020
Event: Proceedings of International Conference on Intelligent Robots and Systems 2020 - Las Vegas, United States
Duration: 25.10.2020 – 29.10.2020


Conference: Proceedings of International Conference on Intelligent Robots and Systems 2020
Abbreviated title: IROS 2020
Country/Territory: United States
City: Las Vegas


