In this homework we are going to insert 3D objects into a scene. Image-based lighting is one way to relight an object: we capture a 360-degree panoramic high dynamic range (HDR) photograph of a scene, which provides lighting information from all angles incident to the camera. The HDR image can be generated by combining multiple low dynamic range (LDR) images captured at different exposures.
In order to generate the HDR radiance map of a scene, we photograph a spherical mirror at different exposures. At least three images, with a two-stop difference between one shot and the next, are captured to build the HDR image. We use the method proposed by Debevec and Malik 1997 to combine the different exposures into one radiance map. After we have the radiance map, we use Blender to model and render the scene. The details are described below.
First, I'd like to show an example scene obtained by tuning a few parameters in Blender.
An example scene
Take scene1 as an example; let's see how to combine LDR images into an HDR radiance map. First we captured 5 LDR spherical-mirror images at different exposures. The shot with the longest exposure captures the dark parts of the scene; in this case the exposure time is 4 sec. I then decreased the exposure time by two stops at a time, down to 1/60 sec, to capture the bright parts. The following shows the LDR spherical-mirror images cropped from the original shots.
4s | 1s | 1/4s | 1/15s | 1/60s |
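The spacing of the exposure ladder above can be checked in stops (a small sketch; the times are those listed in the table, and note that the 1/4 s to 1/15 s step is only approximately two stops, since cameras round shutter speeds to standard values):

```python
import math

# exposure times from the capture above, longest to shortest
times = [4, 1, 1/4, 1/15, 1/60]

# difference in stops between consecutive shots: log2 of the exposure ratio
stops = [math.log2(a / b) for a, b in zip(times, times[1:])]
print([round(s, 2) for s in stops])  # → [2.0, 2.0, 1.91, 2.0]
```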
After we have the different exposures, we can use the method introduced in Debevec and Malik 1997 to combine them into an HDR radiance map. The pixel value in an LDR image is a function of the unknown scene radiance and the known exposure duration. The inverse response function g of the camera is defined by

g(Zij) = ln(Ei) + ln(tj)    (Eq. 1)

where Zij is the value of pixel i in image j, Ei the radiance at pixel i, and tj the exposure time of photo j. The second derivative of g is constrained to be near 0, i.e. g(z-1) - 2*g(z) + g(z+1) = 0, to ensure that g is smooth. Note that Ei is constant across the LDR images as long as the scene is static, so we can solve for g and E from Eq. 1.

However, well-exposed pixels in an LDR image provide more trustworthy information, so we weight the contribution of each pixel using the hat function w = @(z) double(127.5-abs(z-127.5)), where the pixel value z is in [0, 255]. Mid-range values carry the least error; pixels that are very bright or very dark should contribute less than well-exposed pixels.

Once g is known, we can build the radiance map from the LDR images by taking a weighted average of Eq. 1 over the exposures: ln(Ei) = sum_j w(Zij)*(g(Zij) - ln(tj)) / sum_j w(Zij). The following images are the g curve and the radiance map recovered from the LDR images.
HDR radiance map (note: the radiance is shown in log space) | g function |
Rendering Objects
More Results