In this contribution, we present an approach to enhancing 3D objects that are automatically reconstructed from semantic media in a cloud-based ambient platform. Because reconstruction runs as an automatic background process, the resulting objects contain artifacts and are neither aligned nor positioned well for direct use in mobile augmented reality apps such as our InfoGrid system. Our goal is to automate the enhancement of these 3D objects. In our approach, we monitor users' interactions with a web-based 3D editor. From these interactions, we derive constraints and show that, for our scenario, these parameters can be generalized and applied to other 3D objects so that they can be processed automatically in the background. This continues previous work and extends the Network Environment for Multimedia Objects (NEMO), a web-based framework that serves as the technical platform for our research project Ambient Learning Spaces (ALS). NEMO is the basis for ALS and, among other features, provides contextualized access to and retrieval of semantic media. In various ALS contexts, 3D renderings create a higher sense of immersion than still images or video. We conclude this article with a discussion of our findings and with a summary and outlook.
|Pages: 27 - 32
|Published: 01.11.2018
|8th International Conference on Ambient Computing, Applications, Services and Technologies - Athens, Greece
Duration: 18.11.2018 → 22.11.2018