Abstract
Domain adaptation techniques enable the re-use and transfer of existing labeled datasets from a source to a target domain in which little or no labeled data exists. Recently, image-level domain adaptation approaches have demonstrated impressive results in adapting from synthetic to real-world environments by translating source images to the style of a target domain. However, the domain gap between source and target may not only be caused by a different style but also by a change in viewpoint. This case necessitates a semantically consistent translation of source images and labels to the style and viewpoint of the target domain. In this work, we propose the Novel Viewpoint Adaptation (NoVA) model, which enables unsupervised adaptation to a novel viewpoint in a target domain for which no labeled data is available. NoVA utilizes an explicit representation of the 3D scene geometry to translate source view images and labels to the target view. Experiments on adaptation to synthetic and real-world datasets show the benefit of NoVA compared to state-of-the-art domain adaptation approaches on the task of semantic segmentation.
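The core geometric idea described in the abstract, translating source-view pixels and their semantic labels to a target viewpoint using explicit 3D scene geometry, can be illustrated with a small depth-based forward-warping sketch. This is not the authors' NoVA implementation; it only assumes per-pixel depth for the source view, shared camera intrinsics `K`, and a known source-to-target relative pose `T`, all of which are illustrative assumptions.

```python
# Minimal sketch of viewpoint translation via explicit 3D geometry
# (depth-based forward warping). NOT the NoVA model itself; an
# illustration under assumed inputs:
#   image  (H, W, 3)  source-view RGB
#   labels (H, W)     source-view semantic labels
#   depth  (H, W)     source-view depth in metres
#   K      (3, 3)     camera intrinsics (assumed shared by both views)
#   T      (4, 4)     source-to-target camera pose
import numpy as np

def warp_to_target_view(image, labels, depth, K, T):
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, flattened row-major.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Unproject pixels to 3D points in the source camera frame.
    pts_src = (np.linalg.inv(K) @ pix.T) * depth.reshape(1, -1)

    # Transform the points into the target camera frame.
    pts_src_h = np.vstack([pts_src, np.ones((1, pts_src.shape[1]))])
    pts_tgt = (T @ pts_src_h)[:3]

    # Project into the target image plane.
    proj = K @ pts_tgt
    z = proj[2]
    valid = z > 1e-6
    u_t = np.round(proj[0] / np.where(valid, z, 1.0)).astype(int)
    v_t = np.round(proj[1] / np.where(valid, z, 1.0)).astype(int)
    valid &= (u_t >= 0) & (u_t < W) & (v_t >= 0) & (v_t < H)

    # Forward-splat image and labels with a z-buffer so nearer points win.
    warped_img = np.zeros_like(image)
    warped_lbl = np.full_like(labels, 255)  # 255 marks holes / unknown
    zbuf = np.full((H, W), np.inf)
    flat_img = image.reshape(-1, 3)
    flat_lbl = labels.reshape(-1)
    for i in np.flatnonzero(valid):
        ut, vt = u_t[i], v_t[i]
        if z[i] < zbuf[vt, ut]:
            zbuf[vt, ut] = z[i]
            warped_img[vt, ut] = flat_img[i]
            warped_lbl[vt, ut] = flat_lbl[i]
    return warped_img, warped_lbl
```

Warping image and label map with the same geometric transform is what keeps the translated training pair semantically consistent; occlusions and disocclusions show up as holes (label 255 here) that a learned model would have to fill.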
Original language | English
---|---
Title of host publication | 2019 International Conference on 3D Vision (3DV)
Number of pages | 10
Publisher | IEEE
Publication date | 09.2019
Pages | 116-125
Article number | 8885955
ISBN (Print) | 978-1-7281-3132-0
ISBN (Electronic) | 978-1-7281-3131-3
DOIs |
Publication status | Published - 09.2019
Event | 7th International Conference on 3D Vision, Quebec, Canada, 15.09.2019 → 18.09.2019 (Conference number: 153712)