Learning to Estimate Indoor Lighting from 3D Objects

In this work, we take a step towards more accurate prediction of environment lighting given a single image of a known object. We develop a deep learning method that encodes the latent space of indoor lighting with few parameters and is trained on a database of environment maps. This latent space is then used to generate lighting predictions that are both more realistic and more accurate than those of previous methods. Our first contribution is a deep autoencoder that learns a feature space which compactly models lighting. Our second contribution is a convolutional neural network that predicts the lighting from a single image of a known object. To train these networks, our third contribution is a novel dataset of 21,000 HDR indoor environment maps. Our results indicate that the predictor can generate plausible lighting estimates even from diffuse objects.
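For intuition, here is a minimal PyTorch sketch of the two-stage idea described above: an autoencoder that compresses HDR environment maps into a compact latent code, and a CNN that regresses that code from a single image of a known object. The layer sizes, latent dimension, and input resolutions below are illustrative assumptions and do not reproduce the architectures used in the paper.

```python
# Sketch only: layer sizes, the 64-D latent code and the 64x128 panorama
# resolution are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class EnvMapAutoencoder(nn.Module):
    """Compresses an HDR environment map into a low-dimensional code."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(          # input: 3 x 64 x 128 panorama
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 16),
            nn.Unflatten(1, (128, 8, 16)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
            nn.ReLU(),                         # HDR radiance is non-negative
        )

    def forward(self, envmap):
        z = self.encoder(envmap)
        return self.decoder(z), z


class LightPredictor(nn.Module):
    """Regresses the lighting code from a single image of a known object."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(              # input: 3 x 128 x 128 object crop
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, image):
        return self.net(image)


if __name__ == "__main__":
    ae, predictor = EnvMapAutoencoder(), LightPredictor()
    envmap = torch.rand(1, 3, 64, 128)         # dummy HDR panorama
    image = torch.rand(1, 3, 128, 128)         # dummy image of a known object
    recon, z = ae(envmap)                      # reconstruction + lighting code
    z_pred = predictor(image)                  # trained to match the encoder's z
    print(recon.shape, z.shape, z_pred.shape)
```

In this setup, the autoencoder would first be trained to reconstruct environment maps, and the predictor would then be trained to regress the frozen encoder's latent code from object images, so its output can be decoded into a full lighting estimate.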

Paper

Henrique Weber, Donald Prévost, and Jean-François Lalonde
Learning to Estimate Indoor Lighting from 3D Objects
International Conference on 3D Vision 2018
[arXiv:1806.03994 pre-print] [BibTeX]

Code

Our code is available on GitHub.

Models and data

We also provide pre-trained models as well as training data for three object/material configurations (bunny-diffuse, dragon-glossy, and buddha-roughplastic).

Poster

Video

Acknowledgements

The authors gratefully acknowledge the following funding sources:

  • INO excellence scholarship
  • NSERC Discovery Grant RGPIN-2014-05314
  • NVIDIA Corporation with the donation of the Tesla K40 and Titan X GPUs used for this research
  • REPARTI Strategic Network
