Abstract
Depth cameras have gained much popularity in robotics in recent years. The Microsoft Kinect camera enables a mobile robot to perform essential tasks such as localization and navigation. Unfortunately, such structured light cameras also suffer from limitations: direct sunlight renders them blind, and transparent objects such as glass windows cannot be detected. This is a problem for obstacle detection, where false-negative measurements must be avoided. Ultrasound sensors, meanwhile, have been studied by the robotics research community for decades. While they have received less attention since the advent of laser scanners and cameras, they remain successful in special applications due to their robustness and simplicity. In this paper we argue that depth cameras and ultrasound sensors complement each other well: ultrasound sensors can compensate for the weaknesses inherent in camera-based sensing. We present a sensor fusion algorithm that merges depth camera data and ultrasound measurements using an occupancy grid approach, and we validate the algorithm with obstacles in multiple scenarios.
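The occupancy grid fusion named in the abstract can be illustrated with a minimal log-odds sketch. This is not the paper's implementation: the class name `FusionGrid`, the grid size, and the hit probabilities `P_HIT_CAMERA` and `P_HIT_SONAR` are assumptions chosen for illustration, and the inverse sensor models are deliberately simplified.

```python
# Minimal sketch of occupancy-grid sensor fusion in log-odds form.
# All parameters below are illustrative assumptions, not values from the paper.
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

class FusionGrid:
    def __init__(self, width, height, p_prior=0.5):
        # Log-odds representation: 0.0 corresponds to the 0.5 prior.
        self.grid = np.full((height, width), logit(p_prior))

    def update_cell(self, x, y, p_occupied):
        # Bayesian update in log-odds: evidence from any sensor simply
        # adds up, which is what makes grid-based fusion straightforward.
        self.grid[y, x] += logit(p_occupied)

    def probability(self):
        # Recover occupancy probabilities from log-odds.
        return 1.0 / (1.0 + np.exp(-self.grid))

# Assumed inverse sensor models: the depth camera gives confident,
# narrow evidence; the wide-beam ultrasound gives weaker evidence
# spread over several cells, but also fires on glass and in sunlight.
P_HIT_CAMERA = 0.9
P_HIT_SONAR = 0.7

grid = FusionGrid(100, 100)
grid.update_cell(50, 40, P_HIT_CAMERA)   # camera detects an obstacle
for x in range(48, 53):                  # sonar arc covers a band of cells
    grid.update_cell(x, 40, P_HIT_SONAR)
print(grid.probability()[40, 48:53])     # fused cell stands out near 0.95
```

Because both sensors contribute additive log-odds evidence, a glass pane missed by the camera still accumulates occupancy probability from repeated ultrasound hits, which is the complementarity the abstract argues for.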
Original language | English |
---|---|
Title of host publication | 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV) |
Number of pages | 6 |
Publisher | IEEE |
Publication date | 01.11.2016 |
Pages | 1-6 |
Article number | 7838832 |
ISBN (Electronic) | 978-1-5090-3549-6, 978-1-5090-3550-2 |
DOIs | |
Publication status | Published - 01.11.2016 |
Event | 14th International Conference on Control, Automation, Robotics and Vision, Phuket, Thailand. Duration: 13.11.2016 → 15.11.2016. Conference number: 126282 |