Human-robot Interaction Interface for Mobile and Stationary Robots based on Real-time 3D Human Body and Hand-finger Pose Estimation

Kristian Ehlers, Konstantin Brama

Abstract

This paper presents a real-time gesture-based human-robot interaction (HRI) interface for mobile and stationary robots. A human detection approach estimates the full 3D point cloud of a person inside the field of view of a moving camera. The pose of the human body is then estimated using an efficient self-organizing map approach. Furthermore, a hand-finger pose estimation approach based on a self-scaling kinematic hand skeleton is presented and evaluated. A trained support vector machine classifies 29 hand-finger gestures based on the angles of the finger joints. The HRI interface is integrated into the ROS framework and qualitatively evaluated in a first test scenario on a mobile robot equipped with an RGB-D camera for gesture interaction. Since the hand-finger pose, the hand-finger gesture, and the whole-body pose are all estimated, the interface allows the flexible implementation of various applications.
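
The abstract does not detail the self-organizing map (SOM) step, so the following is a rough Python sketch of the generic technique it names: adapting the nodes of a small skeleton graph to a 3D point cloud via competitive learning. The chain topology, learning-rate and radius schedules, and all parameter values here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_som(points, nodes, edges, epochs=10, lr0=0.5, radius0=2.0):
    """Adapt SOM node positions to a 3D point cloud.

    points: (N, 3) cloud, nodes: (M, 3) initial node positions,
    edges: dict mapping node index -> neighbor indices (skeleton graph).
    """
    m = len(nodes)
    # Precompute pairwise graph distances (BFS) for the neighborhood function.
    dist = np.full((m, m), np.inf)
    for s in range(m):
        dist[s, s] = 0.0
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in edges[u]:
                    if np.isinf(dist[s, v]):
                        dist[s, v] = dist[s, u] + 1.0
                        nxt.append(v)
            frontier = nxt
    rng = np.random.default_rng(0)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                 # decaying learning rate
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)
        for p in points[rng.permutation(len(points))]:
            # Best-matching unit: the node closest to the sampled point.
            bmu = np.argmin(np.linalg.norm(nodes - p, axis=1))
            # Pull the BMU and its graph neighbors toward the point.
            h = np.exp(-((dist[bmu] / radius) ** 2))
            nodes += lr * h[:, None] * (p - nodes)
    return nodes

# Toy usage: fit a 5-node chain to points scattered along a line.
pts = np.random.default_rng(1).normal(size=(500, 3)) * [1.0, 0.05, 0.05]
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
fitted = fit_som(pts, np.zeros((5, 3)), chain)
```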
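
Similarly, a minimal sketch of the gesture-classification step, assuming scikit-learn as the SVM implementation: feature vectors of finger-joint angles are mapped to one of the 29 gesture classes. The feature dimensionality (20 angles), the RBF kernel, and the randomly generated training data are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: each sample is a vector of finger-joint angles
# (radians) read off the estimated kinematic hand skeleton.
n_samples, n_angles, n_gestures = 2900, 20, 29
X = rng.uniform(0.0, np.pi / 2, size=(n_samples, n_angles))
y = rng.integers(0, n_gestures, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize the angles, then fit a multi-class SVM (one-vs-one by default).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Classify the joint angles of a new frame.
print("predicted gesture id:", clf.predict(X_test[:1])[0])
```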

Original language: English
Title of host publication: Emerging Technologies and Factory Automation (ETFA)
Editors: Kristian Ehlers, Konstantin Brama
Publisher: IEEE
Publication date: 03.11.2016
ISBN (Print): 978-1-5090-1315-9
ISBN (Electronic): 978-1-5090-1314-2
Publication status: Published - 03.11.2016
Event: 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA) - Berlin, Germany
Duration: 06.09.2016 - 09.09.2016
URL: https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7593665
