The goal of this project is to become familiar with high dynamic range (HDR) imaging, image-based lighting (IBL), and their applications. By the end of this project, you will be able to create HDR images from sequences of low dynamic range (LDR) images and to composite 3D models seamlessly into photographs using image-based lighting techniques.
HDR images are widely used by graphics and visual effects artists for a variety of applications,
such as contrast enhancement, hyper-realistic art, post-process intensity adjustments, and image-based
lighting. We will focus on their use in image-based lighting, specifically relighting virtual objects.
One way to relight an object is to capture a 360-degree panoramic (omnidirectional) HDR photograph of a
scene, which provides lighting information from all angles incident to the camera (hence the term image-based
lighting). Capturing such an image is difficult with standard cameras, because it requires both panoramic
image stitching and LDR to HDR conversion. An easier alternative is to capture an HDR photograph of a
spherical mirror, which provides the same omni-directional lighting information (up to some physical
limitations dependent on sphere size and camera resolution).
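To make the mirror-ball idea concrete, each pixel on the ball can be mapped to the world direction the reflected light arrives from. The sketch below is a minimal illustration, assuming an orthographic camera looking straight at the ball (a common simplification); the function name and image conventions are mine, not from the original project.

```python
import numpy as np

def mirror_ball_directions(size):
    """For each pixel of a square crop of a mirror-ball photo, compute the
    world-space direction the reflected light arrives from.

    Assumes an orthographic camera looking down -z at the ball; pixels
    outside the ball's silhouette get NaN directions.
    """
    # Pixel coordinates normalized to [-1, 1] across the ball.
    u = np.linspace(-1, 1, size)
    x, y = np.meshgrid(u, -u)          # flip y so +y points up
    r2 = x**2 + y**2
    inside = r2 <= 1.0

    # Surface normal of the unit sphere at each pixel (camera at +z).
    z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    n = np.stack([x, y, z], axis=-1)

    # View ray is (0, 0, -1); reflect it about the normal:
    # d = v - 2 (v . n) n
    v = np.array([0.0, 0.0, -1.0])
    d = v - 2.0 * (n @ v)[..., None] * n
    d[~inside] = np.nan
    return d

dirs = mirror_ball_directions(101)
# The ball's center reflects straight back toward the camera (0, 0, 1),
# while the rim reflects directions behind the ball -- which is why a
# single photo covers (nearly) the full sphere of incident lighting.
```

This also shows the physical limitation mentioned above: directions near the rim are squeezed into very few pixels, so angular resolution degrades there.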
An HDR image contains a large range of luminance levels, while an LDR image has a predefined, limited domain of luminance, e.g. [0, 255]. A standard camera can only capture this limited domain, not the full range of luminance present in the scene. Our goal is to represent the scene's full luminance range in a single picture (the HDR image).
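The clipping problem is easy to demonstrate with a toy model of an idealized camera; the numbers and function below are illustrative assumptions, not measurements from the project.

```python
import numpy as np

def expose(radiance, t, bits=8):
    """Simulate an idealized LDR camera: scale scene radiance by the
    exposure time t, then clip and quantize to [0, 2^bits - 1]."""
    levels = 2**bits - 1
    return np.clip(np.round(radiance * t * levels), 0, levels).astype(int)

# A toy scene spanning roughly four orders of magnitude of radiance
# (deep shadow up to a bright window) -- more than 8 bits can hold at once.
scene = np.array([0.0005, 0.01, 0.2, 4.0])

short = expose(scene, t=0.25)   # keeps highlights, crushes shadows to 0
long_ = expose(scene, t=50.0)   # resolves shadows, clips highlights to 255
```

No single exposure records both ends of the range, which is exactly why we capture a bracketed sequence and merge it into one HDR image.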
The first part was to take pictures with the help of the mirror ball in several different locations. This way I had more than one possibility when creating the HDR image.
First set. Exposures -2, -1, 0, 1, and 2
Second set. Exposures -2, -1, 0, 1, and 2
As we can see in Debevec's paper, he solves for the irradiance E via the response function g(Z) = ln(Et) = ln(E) + ln(t), where t is the exposure time. The following is the recovered ln(E) map (rescaled for visibility) for each exposure. Ideally, they should all look the same, because the only varying factor is the shutter speed t.
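Once g is known, Debevec and Malik recover ln(E) per pixel as a weighted average of g(Z) − ln(t) over all exposures, with a hat weighting that trusts mid-range pixel values most. A minimal NumPy sketch of that fusion step (function names are mine):

```python
import numpy as np

def weight(z, zmin=0, zmax=255):
    # Debevec's hat weighting: trust mid-range pixel values most.
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0.5 * (zmin + zmax), z - zmin, zmax - z)

def radiance_map(images, log_t, g):
    """Fuse LDR exposures into a log-irradiance map.

    images : list of uint8 arrays (same shape), one per exposure
    log_t  : ln of each exposure time
    g      : response curve, g[z] for z in 0..255
    Implements the weighted average of g(Z_ij) - ln t_j from
    Debevec & Malik.
    """
    num = np.zeros(images[0].shape)
    den = np.zeros(images[0].shape)
    for img, lt in zip(images, log_t):
        w = weight(img)
        num += w * (g[img] - lt)   # each exposure votes for ln(E)
        den += w
    return num / np.maximum(den, 1e-8)   # guard fully clipped pixels

# Synthetic check with a linear sensor, g(z) = ln(z): a pixel seen as
# 100 at t = 1 and 200 at t = 2 should give the same ln(E) either way.
g = np.log(np.maximum(np.arange(256), 1)).astype(float)
img1 = np.full((2, 2), 100, dtype=np.uint8)
img2 = np.full((2, 2), 200, dtype=np.uint8)
lnE = radiance_map([img1, img2], [0.0, np.log(2.0)], g)
```

The synthetic check mirrors the "all ln(E) maps should look the same" observation above: each exposure, after subtracting its ln(t), votes for the same irradiance.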
If we want to get the value of g, we first have to solve a system of linear equations. The objective has two terms: a data (loss) term, which says each observation should satisfy g(Z) = ln(E) + ln(t), and a smoothness term, which says the g curve should be smooth. The smoothness is controlled by λ.
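That linear system can be written out and solved by least squares. Below is a sketch of the gsolve step in the style of Debevec & Malik's paper, assuming NumPy and 8-bit pixels; the hat weighting and the anchor g(128) = 0 follow the paper, while the variable names are mine.

```python
import numpy as np

def gsolve(Z, log_t, lam, zmax=255):
    """Recover the log response curve g by linear least squares.

    Z[i, j] : value of sampled pixel i in exposure j (ints in 0..zmax)
    log_t[j]: ln of exposure time t_j
    lam     : weight of the smoothness term (the lambda above)
    """
    n = zmax + 1
    npix, nexp = Z.shape
    A = np.zeros((npix * nexp + n - 1, n + npix))
    b = np.zeros(A.shape[0])

    def w(z):
        # Hat weighting: trust mid-range pixel values most.
        return z + 1 if z <= n // 2 else zmax - z + 1

    k = 0
    # Data term: g(Z_ij) - ln E_i = ln t_j
    for i in range(npix):
        for j in range(nexp):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * log_t[j]
            k += 1
    # Anchor the curve's arbitrary offset: g(middle) = 0
    A[k, n // 2] = 1
    k += 1
    # Smoothness term: lam * (g(z-1) - 2 g(z) + g(z+1)) = 0
    for z in range(1, n - 1):
        wz = lam * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2 * wz, wz
        k += 1

    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n], x[n:]   # response curve g, per-pixel ln E

# Tiny synthetic check: a linear sensor, two pixels, three exposures.
Z = np.array([[30, 60, 120], [60, 120, 240]])
g, lnE = gsolve(Z, np.log([1.0, 2.0, 4.0]), lam=0.01)
```

With a small λ the data term dominates and the recovered curve fits the observations closely; a larger λ trades that fit for a smoother curve.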
Tone images & g function
First set
Second set
Pokemon
Results: An image of a Poké Ball and Pikachu. I made the Poké Ball reflective so the HDR environment shows up in its reflections.
My room
Results: I have arms in my room
The garden
Results: PIKMIN!!!!
My room
Results: A TIE fighter in the parking lot
- Using an HDR image is helpful when the task needs a wide range of exposure levels.
- Even though a whole set of differently exposed images is needed, solving for the g function is a good way to obtain the HDR image from the observed pixel ranges.
- Blender is a helpful tool for creating animations and virtual objects. I see a future for this in virtual reality.
- Using Blender in this project makes the basic principles of image processing much more tangible.
- MathWorks hdrread documentation: http://www.mathworks.com/help/images/ref/hdrread.html?refresh=true
- High dynamic range imaging and tonemapping: http://cybertron.cg.tu-berlin.de/eitz/hdr/
- High Dynamic Range Imaging and Tone Mapping: http://chiakailiang.org/project_hdr/