Sensor fusion of depth camera and ultrasound data for obstacle detection and robot navigation

D. Forouher, M. G. Besselmann, E. Maehle

Abstract

Depth cameras have gained much popularity in robotics in recent years. The Microsoft Kinect camera enables a mobile robot to perform essential tasks such as localization and navigation. Unfortunately, such structured-light cameras also suffer from limitations. Exposure to direct sunlight renders them blind, and transparent objects like glass windows cannot be detected. This is a problem for obstacle detection, where false-negative measurements must be avoided. At the same time, ultrasound sensors have been studied by the robotics research community for decades. While they have received less attention since the advent of laser scanners and cameras, they remain valuable for special applications due to their robustness and simplicity. In this paper we argue that depth cameras and ultrasound sensors complement each other very well: ultrasound sensors can compensate for the failure modes inherent in camera-based sensors. We present a sensor fusion algorithm that merges depth camera data and ultrasound measurements using an occupancy grid approach. We validated the algorithm on obstacles in multiple scenarios.
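To illustrate the occupancy-grid fusion idea the abstract describes, the sketch below shows a common way to merge measurements from two heterogeneous range sensors: each sensor contributes log-odds evidence to the cells it observes, so fusion reduces to addition. This is a minimal sketch, not the authors' algorithm; the grid size, the per-sensor occupancy probabilities, and the hand-picked cell lists are illustrative assumptions standing in for real inverse sensor models and ray casting.

```python
# Minimal sketch of log-odds occupancy-grid fusion of two range sensors.
# NOT the paper's algorithm: grid size, probabilities, and cell lists
# below are illustrative assumptions.
import numpy as np

GRID = 100  # grid is GRID x GRID cells (assumed size)

def logodds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    def __init__(self):
        # 0.0 log-odds corresponds to a 0.5 prior occupancy probability.
        self.L = np.zeros((GRID, GRID))

    def update(self, cells, p_occ):
        """Bayesian update: add the inverse-sensor-model log-odds
        of each observed cell to its accumulated log-odds."""
        for (i, j) in cells:
            self.L[i, j] += logodds(p_occ)

    def occupied(self, threshold=0.7):
        """Cells whose posterior occupancy probability exceeds threshold."""
        P = np.exp(self.L) / (1.0 + np.exp(self.L))
        return P > threshold

grid = OccupancyGrid()
# A depth-camera hit: confident and narrow (assumed p_occ = 0.9).
grid.update([(50, 50)], p_occ=0.9)
# An ultrasound echo covers a wide cone, so each cell in the arc gets a
# weaker vote (assumed p_occ = 0.6); three cells stand in for the arc.
grid.update([(49, 50), (50, 50), (51, 50)], p_occ=0.6)
# Cells along a ray that returned no echo are likely free (p_occ < 0.5).
grid.update([(50, 40)], p_occ=0.3)
print(grid.occupied().sum(), "cells flagged as obstacles")
```

Because the updates are additive in log-odds space, the result is independent of the order in which camera and ultrasound measurements arrive, which is what makes this representation convenient for fusing sensors with very different fields of view.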
Original language: English
Title of host publication: 2016 14th International Conference on Control, Automation, Robotics and Vision (ICARCV)
Number of pages: 6
Publisher: IEEE
Publication date: 01.11.2016
Pages: 1-6
Article number: 7838832
ISBN (Electronic): 978-1-5090-3549-6, 978-1-5090-3550-2
Publication status: Published - 01.11.2016
Event: 14th International Conference on Control, Automation, Robotics and Vision - Phuket, Thailand
Duration: 13.11.2016 - 15.11.2016
Conference number: 126282

