Abstract
Gesture-based Human-Computer Interaction offers enormous potential for designing ergonomic user interfaces in future smart environments. Although gestures can be perceived as very natural, the specific gesture set of a dedicated interface may be complex and require some form of self-description of the expected body movements. We have therefore developed a machine-readable, XML-based model of Labanotation, a camera-based movement analysis engine for automatic model creation, and a graphical editor that supports the manual design of gestures. In this paper, we present a tool for the automatic generation of multimodal, human-readable gesture documentation based on the XML model. The tool currently supports text and 3D model animation and can be extended to other modalities.
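To illustrate the idea of generating human-readable text documentation from an XML-based gesture model, the following minimal Python sketch parses a small, hypothetical Labanotation-style XML fragment and renders a step-by-step description. The element and attribute names (`gesture`, `movement`, `bodyPart`, `direction`, `level`, `beats`) are illustrative assumptions and do not reflect the schema actually used by the authors.

```python
# Minimal sketch: rendering a text description from a hypothetical
# XML-based Labanotation model. The schema shown here is assumed,
# not the one defined in the paper.
import xml.etree.ElementTree as ET

SAMPLE = """
<gesture name="wave">
  <movement bodyPart="right arm" direction="up" level="high" beats="1"/>
  <movement bodyPart="right arm" direction="side-right" level="high" beats="1"/>
  <movement bodyPart="right arm" direction="side-left" level="high" beats="1"/>
</gesture>
"""

def describe(xml_text: str) -> str:
    """Produce a human-readable, step-by-step description of a gesture."""
    gesture = ET.fromstring(xml_text)
    lines = [f"Gesture '{gesture.get('name')}':"]
    for i, move in enumerate(gesture.findall("movement"), start=1):
        lines.append(
            f"  Step {i}: move the {move.get('bodyPart')} "
            f"{move.get('direction')} at {move.get('level')} level "
            f"for {move.get('beats')} beat(s)."
        )
    return "\n".join(lines)

if __name__ == "__main__":
    print(describe(SAMPLE))
```

Other output modalities, such as the 3D model animation mentioned in the abstract, would follow the same pattern: a dedicated generator traverses the XML model and maps each movement element to its modality-specific representation.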
Original language | English |
---|---|
Title of host publication | HCI International 2016 – Posters' Extended Abstracts, Part II |
Editors | Constantine Stephanidis |
Number of pages | 7 |
Volume | 618 |
Place of publication | Cham |
Publisher | Springer International Publishing |
Publication date | 22.06.2016 |
Pages | 137-143 |
ISBN (Print) | 978-3-319-40542-1 |
ISBN (Electronic) | 978-3-319-40542-1 |
Publication status | Published - 22.06.2016 |
Event | 18th International Conference on Human-Computer Interaction (HCI International 2016), Toronto, Canada. Duration: 17.07.2016 → 22.07.2016. Conference number: 18 |