Adding virtual objects

Castro Cabrera, Ramses


0. Description of the project

The goal of this project is to familiarize yourself with high dynamic range (HDR) imaging, image-based lighting (IBL), and their applications. By the end of this project, you will be able to create HDR images from sequences of low dynamic range (LDR) images and also learn how to composite 3D models seamlessly into photographs using image-based lighting techniques.


1. Introduction

HDR images are widely used by graphics and visual effects artists for a variety of applications, such as contrast enhancement, hyper-realistic art, post-process intensity adjustments, and image-based lighting. We will focus on their use in image-based lighting, specifically relighting virtual objects. One way to relight an object is to capture a 360-degree panoramic (omnidirectional) HDR photograph of a scene, which provides lighting information from all angles incident to the camera (hence the term image-based lighting). Capturing such an image is difficult with standard cameras, because it requires both panoramic image stitching and LDR-to-HDR conversion. An easier alternative is to capture an HDR photograph of a spherical mirror, which provides the same omnidirectional lighting information (up to some physical limitations depending on sphere size and camera resolution).
An HDR image contains a large range of luminance levels, while an LDR image has a predefined and limited domain of luminance, e.g. [0, 255]. Our camera can only capture this limited domain, not a high dynamic range. Our goal is to represent the large range of luminance in a single picture (the HDR image).

Part 0: Capturing the images

The first part was to take pictures with the help of the mirror ball in different places. This way I had more than one possibility when trying to create the HDR image.



First set. Exposures -2,-1, 0, 1, and 2


Second set. Exposures -2,-1, 0, 1, and 2


Part 1: Obtaining the HDR radiance map

As we can see in Debevec's paper, he solves for the irradiance E through the response function g(Z) = ln(E·t) = ln(E) + ln(t), where t is the exposure time. The following is the recovered ln(E) map (rescaled for visibility) for each exposure. Ideally, they should all look the same, because the only varying factor is the shutter speed t.

If we want to obtain the value of g, we first have to solve a system of linear equations. The objective has two terms: a data (loss) term that fits the observed pixel values, and a smoothness term that encourages the g curve to be smooth. The amount of smoothing is controlled by the parameter λ. We should select pixels whose intensity values lie within the range [Z_min, Z_max].
By taking photos at different exposures we can select different pixels with a uniform intensity distribution. We then take the values of those pixels from each channel, and the function g is estimated for each channel. Using the g curves we can then reconstruct the HDR image.
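The steps above can be sketched in code. This project was done in MATLAB, but the following is a minimal NumPy version of the least-squares system from Debevec & Malik (the function name `gsolve` follows their paper; the hat-shaped weighting function and the anchor g(127) = 0 are the usual choices and are assumptions here, not something stated in this report):

```python
import numpy as np

def gsolve(Z, log_t, lam, w):
    """Recover the log inverse response g(0..255) and the log irradiances
    ln(E) by solving Debevec & Malik's linear system in least squares.
    Z:     (N, P) pixel values for N sampled pixels over P exposures
    log_t: (P,) log exposure times
    lam:   smoothness weight (lambda)
    w:     (256,) weighting function, e.g. a hat function"""
    n = 256
    N, P = Z.shape
    # Rows: N*P data equations + 1 anchor + (n-2) smoothness equations
    A = np.zeros((N * P + 1 + (n - 2), n + N))
    b = np.zeros(A.shape[0])

    k = 0
    # Data term: w(z) * (g(z) - ln E_i) = w(z) * ln t_j
    for i in range(N):
        for j in range(P):
            z = Z[i, j]
            A[k, z] = w[z]
            A[k, n + i] = -w[z]
            b[k] = w[z] * log_t[j]
            k += 1
    # Anchor the curve (fixes the free additive constant): g(127) = 0
    A[k, 127] = 1.0
    k += 1
    # Smoothness term: lam * w(z) * (g(z-1) - 2 g(z) + g(z+1)) = 0
    for z in range(1, n - 1):
        A[k, z - 1] = lam * w[z]
        A[k, z] = -2.0 * lam * w[z]
        A[k, z + 1] = lam * w[z]
        k += 1

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n], x[n:]  # g per pixel value, ln(E) per sampled pixel
```

With a linear camera the recovered g should look logarithmic, since Z is proportional to E·t.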


Tone images & g function


First set


Second set


Results: An image of the HDR radiance map, which will be used in the next step.
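Once g is known, the exposure stack is merged into the radiance map by inverting the response and averaging over exposures with the same weighting function: ln E(x) = Σ_j w(Z_j) (g(Z_j) − ln t_j) / Σ_j w(Z_j). A minimal NumPy sketch of that merge for a single channel (the function name and the handling of fully clipped pixels are my own choices, not from this report):

```python
import numpy as np

def radiance_map(Z, g, log_t, w):
    """Merge an exposure stack into a log-radiance map (one channel).
    Z:     (P, H, W) integer pixel values for P exposures
    g:     (256,) recovered log inverse response
    log_t: (P,) log exposure times
    w:     (256,) weighting function
    Returns ln E(x) = sum_j w(Z_j)(g(Z_j) - ln t_j) / sum_j w(Z_j)."""
    num = np.zeros(Z.shape[1:])
    den = np.zeros(Z.shape[1:])
    for j in range(Z.shape[0]):
        wz = w[Z[j]]                       # per-pixel weight for exposure j
        num += wz * (g[Z[j]] - log_t[j])
        den += wz
    den[den == 0] = 1e-8                   # pixels clipped in every exposure
    return num / den
```

For a perfectly linear camera (g(z) = ln z) this recovers the true log irradiance exactly wherever at least one exposure is well exposed.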

Part 2: Rendering synthetic objects into photographs


For this part I found some interesting points:
1. Taking a photograph with interesting lighting makes the render more interesting, and also more realistic!
2. If the light from the environment texture (the world setting) is excessive, the render will show visual artifacts.
3. Using pre-made models from asset websites helps reduce working time.
4. It is convenient to model the whole surface on which the object will be placed. Modeling only part of it results in a color mismatch.
5. In order to create the composite image in MATLAB, all images (the render, the mask, the empty render, and the photograph) must be of type double.
6. It is worth adding lights that follow the real lighting of the original image, i.e., placing the lights on the side that matches the original image.
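The MATLAB compositing mentioned in point 5 follows Debevec's differential rendering: final = M·R + (1 − M)·(I + c·(R − E)), where R is the render with the objects, E the render of the empty local scene, M the object mask, and I the background photograph. A minimal NumPy sketch of the same equation (argument names are illustrative):

```python
import numpy as np

def composite(render, empty, mask, background, c=1.0):
    """Differential-rendering composite.
    render:     rendered scene WITH the virtual objects (R)
    empty:      rendered local scene WITHOUT the objects (E)
    mask:       1 where the virtual objects are, 0 elsewhere (M)
    background: the original photograph (I)
    c:          scale for the lighting effects (shadows, caustics)
    All inputs are float arrays in [0, 1], matching MATLAB doubles."""
    return mask * render + (1 - mask) * (background + c * (render - empty))
```

Inside the mask the render is copied directly; outside it, only the difference the objects cause (shadows, reflections) is added to the photograph, which is why render and empty must cover the same modeled surface.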

I followed the steps in the instructions and obtained the following results:

Pokemon




Results: An image of a Poké Ball and Pikachu. I made the Poké Ball reflective so I could use the HDR image.



My room




Results: I have arms in my room



The garden




Results: PIKMIN!!!!



The parking lot




Results: A TIE-fighter at the parking lot



Conclusions

- Using an HDR image is helpful when the task requires a wide range of exposure information.
- Even though the whole set of exposures is needed, solving for the g function is a good way to obtain the HDR image from the observed color ranges.
- Blender is a helpful tool for creating animations and virtual objects. I look forward to its future in virtual reality.
- The use of Blender in this project makes the basic principles of image processing more tangible.

References

- MathWorks documentation for hdrread: http://www.mathworks.com/help/images/ref/hdrread.html?refresh=true
- High dynamic range imaging and tonemapping: http://cybertron.cg.tu-berlin.de/eitz/hdr/
- High Dynamic Range Imaging and Tone Mapping: http://chiakailiang.org/project_hdr/