Abstract
This paper presents a real-time gesture-based human-robot interaction (HRI) interface for mobile and stationary robots. A human detection approach estimates the full 3D point cloud of a person within the field of view of a moving camera. The pose of the human body is then estimated using an efficient self-organizing map approach. Furthermore, a hand-finger pose estimation approach based on a self-scaling kinematic hand skeleton is presented and evaluated. A trained support vector machine classifies 29 hand-finger gestures based on the angles of the finger joints. The HRI interface is integrated into the ROS framework and qualitatively evaluated in a first test scenario on a mobile robot equipped with an RGB-D camera for gesture interaction. Since the hand-finger pose, the hand-finger gesture, and the whole-body pose are all estimated, the interface allows for flexible implementation of a wide range of applications.
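The abstract describes classifying 29 hand-finger gestures with a trained support vector machine operating on finger-joint angles. The following is a minimal sketch of that classification step only, assuming a scikit-learn SVM and a 20-dimensional joint-angle feature vector; the feature dimension, kernel choice, and library are illustrative assumptions, as the record does not specify the paper's actual implementation.

```python
# Sketch of the gesture-classification step from the abstract: an SVM
# mapping finger-joint angles to one of 29 gesture classes.
# ASSUMPTIONS: 20 joint angles per hand and scikit-learn's SVC are
# illustrative choices, not details taken from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

N_GESTURES = 29       # number of hand-finger gestures (from the abstract)
N_JOINT_ANGLES = 20   # assumed feature dimension: finger-joint angles

rng = np.random.default_rng(0)

# Placeholder training data: in the described system, these angles come
# from the self-scaling kinematic hand-skeleton fit, not random values.
X_train = rng.uniform(0.0, np.pi, size=(2900, N_JOINT_ANGLES))
y_train = rng.integers(0, N_GESTURES, size=2900)

# Scale the angle features and train an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# At runtime, one vector of joint angles yields one gesture label.
angles = rng.uniform(0.0, np.pi, size=(1, N_JOINT_ANGLES))
gesture_id = int(clf.predict(angles)[0])
print(f"predicted gesture class: {gesture_id}")
```

Standardizing the angle features before the SVM is a common default since kernel methods are sensitive to feature scale; the paper itself may use a different kernel or preprocessing.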
| Original language | English |
| --- | --- |
| Title of host publication | Emerging Technologies and Factory Automation (ETFA) |
| Editors | Kristian Ehlers, Konstantin Brama |
| Publisher | IEEE |
| Publication date | 03.11.2016 |
| ISBN (Print) | 978-1-5090-1315-9 |
| ISBN (Electronic) | 978-1-5090-1314-2 |
| DOIs | |
| Publication status | Published - 03.11.2016 |
| Event | 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), Berlin, Germany. Duration: 06.09.2016 → 09.09.2016. https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7593665 |