Sensor fusion of depth camera and ultrasound data for obstacle detection and robot navigation

D. Forouher, M. G. Besselmann, E. Maehle

22 citations (Scopus)

Abstract

Depth cameras have gained much popularity in robotics in recent years. The Microsoft Kinect camera enables a mobile robot to perform essential tasks like localization and navigation. Unfortunately, such structured-light cameras also suffer from limitations. Exposing them to direct sunlight renders them blind, and transparent objects like glass windows cannot be detected. This is a problem for the task of obstacle detection, where false-negative measurements must be avoided. At the same time, ultrasound sensors have been studied by the robotics research community for decades. While they have lost attention with the advent of laser scanners and cameras, they remain successful for special applications due to their robustness and simplicity. In this paper we argue that depth cameras and ultrasound sensors complement each other very well. Ultrasound sensors are able to correct the problems inherent to camera-based sensors. We present a sensor fusion algorithm that merges depth camera data and ultrasound measurements using an occupancy grid approach. We validated the algorithm using obstacles in multiple scenarios.
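The occupancy-grid fusion described in the abstract can be illustrated with a standard log-odds Bayesian update, where both sensors write their inverse-sensor-model probabilities into the same grid. This is a minimal sketch under assumptions of my own (cell indices, probability values, and the `OccupancyGrid` class are hypothetical), not the authors' implementation:

```python
import numpy as np

# Illustrative log-odds occupancy grid (not the paper's code).
# Two sensors update the same grid: a depth camera (precise, but blind to
# glass and direct sunlight) and an ultrasound sensor (coarse but robust).

def logodds(p):
    """Convert an occupancy probability to log-odds."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self, shape):
        # Log-odds 0.0 corresponds to occupancy probability 0.5 (unknown).
        self.L = np.zeros(shape)

    def update(self, cell, p_occ):
        """Bayesian update of one cell with an inverse-sensor-model probability."""
        self.L[cell] += logodds(p_occ)

    def probability(self, cell):
        """Recover the occupancy probability from the accumulated log-odds."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.L[cell]))

grid = OccupancyGrid((10, 10))
# The depth camera sees through a glass pane and reports weak 'free' evidence:
grid.update((3, 4), 0.4)
# Ultrasound echoes off the pane and reports strong 'occupied' evidence, twice:
grid.update((3, 4), 0.9)
grid.update((3, 4), 0.9)
print(grid.probability((3, 4)))  # well above 0.5: the fused grid keeps the obstacle
```

Because log-odds updates are additive, each sensor can contribute evidence independently, and a few strong ultrasound detections outweigh the camera's false-negative reading at the glass surface.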
Original language: English
Title: 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV)
Number of pages: 6
Publisher: IEEE
Publication date: 01.11.2016
Pages: 1-6
Article number: 7838832
ISBN (electronic): 978-1-5090-3549-6, 978-1-5090-3550-2
DOIs
Publication status: Published - 01.11.2016
Event: 14th International Conference on Control, Automation, Robotics and Vision - Phuket, Thailand
Duration: 13.11.2016 - 15.11.2016
Conference number: 126282

