Deep Outdoor Illumination Estimation

We present a CNN-based technique to estimate high-dynamic range outdoor illumination from a single low dynamic range image. To train the CNN, we leverage a large dataset of outdoor panoramas. We fit a low-dimensional physically-based outdoor illumination model to the skies in these panoramas, giving us a compact set of parameters (including sun position, atmospheric conditions, and camera parameters). We extract limited field-of-view images from the panoramas, and train a CNN with this large set of input image--output lighting parameter pairs. Given a test image, this network can be used to infer illumination parameters that can, in turn, be used to reconstruct an outdoor illumination environment map. We demonstrate that our approach allows the recovery of plausible illumination conditions and enables automatic photorealistic virtual object insertion from a single image. An extensive evaluation on both the panorama dataset and captured HDR environment maps shows that our technique significantly outperforms previous solutions to this problem.
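The pipeline above hinges on the fact that a compact parameter set (sun position, atmospheric conditions) can be expanded back into a full sky environment map. As an illustrative sketch only — the paper fits a physically-based sky model to real panoramas, whereas the gradient-plus-sun-lobe formula and all constants below are our own placeholder assumptions — the following pure-Python function shows how an estimated sun azimuth and elevation could be turned into a low-resolution lat-long radiance map:

```python
import math

def sun_direction(azimuth, elevation):
    """Convert sun azimuth/elevation (radians) to a unit direction vector (y-up)."""
    return (math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation),
            math.cos(elevation) * math.cos(azimuth))

def render_sky(azimuth, elevation, width=64, height=32, sun_intensity=100.0):
    """Render a toy lat-long HDR sky map from sun-position parameters.

    Radiance = ambient term + gradient toward the sun + sharp sun lobe.
    This is a stand-in for the physically-based sky model used in the paper,
    not its actual formula; sun_intensity and the lobe exponent are arbitrary.
    """
    sun = sun_direction(azimuth, elevation)
    env = []
    for row in range(height):
        phi = math.pi * (row + 0.5) / height      # polar angle: 0 at zenith
        scan = []
        for col in range(width):
            theta = 2.0 * math.pi * (col + 0.5) / width   # azimuthal angle
            d = (math.sin(phi) * math.sin(theta),
                 math.cos(phi),
                 math.sin(phi) * math.cos(theta))
            # cosine of the angle between this pixel's direction and the sun
            cos_gamma = max(-1.0, min(1.0,
                            d[0]*sun[0] + d[1]*sun[1] + d[2]*sun[2]))
            radiance = (1.0 + 0.5 * cos_gamma
                        + sun_intensity * max(0.0, cos_gamma) ** 100)
            scan.append(radiance)
        env.append(scan)
    return env
```

A CNN trained as described would regress `azimuth` and `elevation` (plus atmospheric parameters) from a crop, after which a renderer like this reconstructs the environment map used for object insertion.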

Paper

Yannick Hold-Geoffroy, Kalyan Sunkavalli, Sunil Hadap, Emiliano Gambaretto and Jean-François Lalonde
Deep Outdoor Illumination Estimation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[arXiv pre-print] [BibTeX]

Supplementary material

We provide additional results on this supplementary page.

Code

Online demo coming soon!

Talk

Slides coming soon!

Video

Coming soon!

Acknowledgements

The authors gratefully acknowledge the following funding sources:

  • A FRQ-NT Ph.D. scholarship to Yannick Hold-Geoffroy
  • A generous donation from the Otis-Lalonde fund in computer vision to Yannick Hold-Geoffroy
  • A generous donation from Adobe to Jean-François Lalonde
  • NVIDIA Corporation, for the donation of the Tesla K40 and Titan X GPUs used for this research
  • NSERC Discovery Grant RGPIN-2014-05314
  • REPARTI Strategic Network
