Depth Texture Synthesis for Realistic Architectural Modeling

Large scenes such as building facades and other architectural structures often contain repeating elements like identical windows or brick patterns. In this paper, we present a novel approach that improves the resolution and geometry of 3D meshes of large scenes containing such repeating elements. By leveraging structure-from-motion (SfM) reconstruction and an off-the-shelf depth sensor, our approach captures a small sample of the scene at high resolution and automatically extends that information to similar regions of the scene. Using RGB and SfM depth information as a guide and simple geometric primitives as a canvas, our approach extends the high-resolution mesh by exploiting powerful, image-based texture synthesis techniques. The final results improve on standard SfM reconstruction with greater detail. Our approach requires less manual labor than full RGB-D reconstruction and is much cheaper than LiDAR-based solutions.
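To illustrate the idea of guided depth texture synthesis, here is a minimal sketch (not the authors' implementation): for each patch of a coarse target region, the best-matching patch in the high-resolution exemplar is found using RGB and coarse SfM depth as guide channels, and the exemplar's high-resolution depth is copied over. Function names, array layouts, and parameters below are hypothetical; a PatchMatch-style search would replace the brute-force loop in practice.

```python
import numpy as np

def synthesize_depth(target_rgb, target_coarse_depth,
                     exemplar_rgb, exemplar_coarse_depth, exemplar_hr_depth,
                     patch=7, stride=3, depth_weight=0.5):
    """Transfer high-resolution depth from an exemplar to a target region.

    target_rgb / exemplar_rgb:              (H, W, 3) floats in [0, 1]
    target_coarse_depth / exemplar_coarse_: (H, W) coarse SfM depth, same scale
    exemplar_hr_depth:                      (H, W) high-resolution sensor depth
    (All shapes and the guide weighting are illustrative assumptions.)
    """
    th, tw = target_coarse_depth.shape
    eh, ew = exemplar_coarse_depth.shape
    out = np.zeros((th, tw), dtype=np.float64)
    weight = np.zeros((th, tw), dtype=np.float64)

    # Stack guide channels: RGB plus a weighted coarse-depth channel.
    t_guide = np.dstack([target_rgb, depth_weight * target_coarse_depth[..., None]])
    e_guide = np.dstack([exemplar_rgb, depth_weight * exemplar_coarse_depth[..., None]])

    for y in range(0, th - patch + 1, stride):
        for x in range(0, tw - patch + 1, stride):
            t_patch = t_guide[y:y + patch, x:x + patch]

            # Brute-force nearest-neighbour search over the exemplar guides.
            best, best_cost = (0, 0), np.inf
            for ey in range(0, eh - patch + 1, stride):
                for ex in range(0, ew - patch + 1, stride):
                    e_patch = e_guide[ey:ey + patch, ex:ex + patch]
                    cost = np.sum((t_patch - e_patch) ** 2)
                    if cost < best_cost:
                        best_cost, best = cost, (ey, ex)

            # Copy the matched high-resolution depth patch, averaging overlaps.
            ey, ex = best
            out[y:y + patch, x:x + patch] += exemplar_hr_depth[ey:ey + patch,
                                                               ex:ex + patch]
            weight[y:y + patch, x:x + patch] += 1.0

    weight[weight == 0] = 1.0
    return out / weight
```

The synthesized depth map would then be fused back onto the geometric-primitive canvas to refine the mesh; that meshing step is omitted here.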

Journal version (2019)

Félix Labrie-Larrivée, Denis Laurendeau, and Jean-François Lalonde
Depth Texture Synthesis for High Resolution Reconstruction of Large Scenes
Machine Vision and Applications, vol. 30, no. 4, 2019.
[PDF pre-print, 97.6MB] [BibTeX]

Conference version (2016)

Félix Labrie-Larrivée, Denis Laurendeau, and Jean-François Lalonde
Depth Texture Synthesis for Realistic Architectural Modeling
Computer and Robot Vision (CRV), 2016.
[PDF pre-print, 20.3MB] [BibTeX]

Talk

You can download the slides in PPTX format. Please cite the source if you use these slides in a presentation.

Videos

Additional results


Acknowledgements

The authors gratefully acknowledge the following funding sources:

  • NSERC/Creaform Industrial Research Chair on 3D Scanning
  • REPARTI Strategic Network
