Bridging vision and commonsense for multimodal situation recognition in pervasive systems

N. Bicocchi, M. Lasagni, F. Zambonelli

Abstract

Pervasive services may have to rely on multimodal classification to implement situation recognition. However, the effectiveness of current multimodal classifiers is often unsatisfactory. In this paper, we describe a novel approach to multimodal classification that integrates a vision sensor with a commonsense knowledge base. Specifically, our approach extracts the individual objects perceived by a camera, classifies each of them with non-parametric algorithms, and then uses a commonsense knowledge base to classify the overall scene with high effectiveness. The resulting classifications can then be fused, again on a commonsense basis, with those of other sensors, both to improve classification accuracy and to deal with missing labels. Experimental results are presented to assess, under different configurations, the effectiveness of our vision sensor and of its integration with other kinds of sensors, showing that the approach is effective and able to correctly recognize a number of situations in open-ended environments.
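As an illustration of the pipeline sketched in the abstract (per-object classification, commonsense scene labelling, and fusion with other sensors), the following minimal Python sketch maps object labels onto candidate situations through a toy commonsense association table and fuses the result with labels from other sensors by simple voting. The association table, sensor labels, and function names are hypothetical assumptions for illustration only, not the authors' implementation.

# Hypothetical sketch of the abstract's pipeline: object labels from a
# vision sensor are mapped to candidate situations via a toy commonsense
# association table, then fused with labels from other sensors by voting.
# All names and data are illustrative assumptions, not the paper's code.
from collections import Counter

# Toy commonsense associations: object -> situations it suggests.
COMMONSENSE = {
    "whiteboard": ["meeting", "lecture"],
    "projector": ["meeting", "lecture"],
    "mug": ["meeting", "break"],
    "treadmill": ["workout"],
}

def scene_from_objects(objects):
    """Vote a scene-level situation from per-object classifications."""
    votes = Counter()
    for obj in objects:
        for situation in COMMONSENSE.get(obj, []):
            votes[situation] += 1
    return votes.most_common(1)[0][0] if votes else None

def fuse(vision_label, other_labels):
    """Fuse the vision-based label with labels from other sensors;
    sensors that produced no label (None) are simply ignored."""
    votes = Counter(l for l in [vision_label, *other_labels] if l is not None)
    return votes.most_common(1)[0][0] if votes else None

if __name__ == "__main__":
    scene = scene_from_objects(["whiteboard", "projector", "mug"])
    print(fuse(scene, ["meeting", None]))  # prints "meeting"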
Original language: English
Title of host publication: 2012 IEEE International Conference on Pervasive Computing and Communications
Number of pages: 9
Publisher: IEEE
Publication date: 01.03.2012
Pages: 48-56
Article number: 6199848
ISBN (Print): 978-1-4673-0256-2
ISBN (Electronic): 978-1-4673-0258-6
DOIs
Publication status: Published - 01.03.2012
Event: 2012 IEEE International Conference on Pervasive Computing and Communications Workshops, Lugano, Switzerland
Duration: 19.03.2012 - 23.03.2012
Conference number: 89907
