HW5: Adding Virtual Objects

Overview

In this homework we insert 3D objects into a scene. Image-based lighting is one way to relight an object: we capture a 360-degree panoramic high dynamic range (HDR) photograph of a scene, which provides lighting information from all angles incident to the camera. The HDR image can be generated by combining multiple low dynamic range (LDR) images captured at different exposures.

Method

To generate the HDR radiance map of a scene, we photograph a spherical mirror at different exposures. At least three images, with a two-stop difference between consecutive shots, are captured to build the HDR image. We use the method proposed by Debevec and Malik 1997 to combine the different exposures into one radiance map. Once we have the radiance map, we use Blender to model and render the scene. The details are described below.

Result

First, I'd like to show an example scene obtained by tuning a few parameters in Blender.

An example scene

Scene1

Taking scene1 as an example, let's see how to combine LDR images into an HDR radiance map. First we captured five LDR images of the sphere mirror at different exposures. A longer exposure captures the dark parts of the scene; here the longest exposure time is 4 sec. I then decreased the exposure time by two stops per shot, down to 1/60 sec, to capture the bright parts. The following shows the LDR sphere mirror images cropped from the original shots.

LDR sphere mirror images at 4s, 1s, 1/4s, 1/15s, and 1/60s

After we have the different exposures, we can use the method introduced in Debevec and Malik 1997 to combine them into an HDR radiance map.
The pixel value in an LDR image is a function of the unknown scene radiance and the known exposure duration. The log inverse function g of the camera response is defined by g(Zij) = ln(Ei) + ln(tj) (Eq. 1), where Zij is the value at pixel i in image j, Ei the radiance at pixel i, and tj the exposure time of photo j. The second derivative of g is pushed toward 0 by a smoothness penalty, g(x-1) - 2*g(x) + g(x+1) ≈ 0, to ensure that g is smooth. Note that Ei is constant across the LDR images as long as the scene is static. We can then solve for g and E from Eq. 1 in a least-squares sense.
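Here is a minimal MATLAB sketch of that solve, adapted from the gsolve listing in the appendix of Debevec and Malik 1997. It assumes Z is a double matrix holding only a small sample of pixel locations (using every pixel would make the system needlessly large), and B(j) = ln(tj):

    function [g, lE] = gsolve(Z, B, l, w)
    % Z(i,j): value of sampled pixel i in image j, double in [0,255]
    % B(j)  : log exposure time ln(tj) of image j
    % l     : lambda, weight of the smoothness term
    % w     : weighting function handle (introduced below)
    % g     : recovered log inverse response; g(z+1) is for value z
    % lE    : recovered log radiance ln(Ei) of the sampled pixels
    n = 256;
    A = zeros(size(Z,1)*size(Z,2) + n + 1, n + size(Z,1));
    b = zeros(size(A,1), 1);
    k = 1;                              % data-fitting equations (Eq. 1)
    for i = 1:size(Z,1)
        for j = 1:size(Z,2)
            wij = w(Z(i,j));
            A(k, Z(i,j)+1) = wij;
            A(k, n+i)      = -wij;
            b(k)           = wij * B(j);
            k = k + 1;
        end
    end
    A(k, 129) = 1;                      % fix the scale: g(128) = 0
    k = k + 1;
    for z = 1:n-2                       % smoothness: g'' pushed to 0
        A(k, z)   =      l * w(z);
        A(k, z+1) = -2 * l * w(z);
        A(k, z+2) =      l * w(z);
        k = k + 1;
    end
    x  = A \ b;                         % least-squares solve
    g  = x(1:n);
    lE = x(n+1:end);
    end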
However, well-exposed pixels in an LDR image provide more trustworthy information, so we weight the contribution of each pixel using the hat function w = @(z) double(127.5-abs(z-127.5)), where the pixel value z is in [0, 255]. Values near the middle of the range carry the least error, and pixels that are nearly saturated or nearly black should contribute less than well-exposed ones. Once g is known, we build the radiance map from the LDR images as the weighted average of Eq. 1: ln(Ei) = sum_j w(Zij)*(g(Zij) - ln(tj)) / sum_j w(Zij). The following images are the g curve and the radiance map recovered from the LDR images.
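As a sketch, the weighted average can be computed per channel like this in MATLAB (imgs and ts are hypothetical names for an HxWxP stack of one color channel and the matching exposure times; g and w are as above):

    % Build the log radiance map as the weighted average of Eq. 1.
    lnE  = zeros(size(imgs,1), size(imgs,2));
    wsum = zeros(size(lnE));
    for j = 1:numel(ts)
        Zj   = double(imgs(:,:,j));          % pixel values in [0,255]
        wz   = w(Zj);                        % hat-function weights
        lnE  = lnE + wz .* (g(Zj+1) - log(ts(j)));
        wsum = wsum + wz;
    end
    lnE = lnE ./ max(wsum, eps);             % guard all-zero weights
    E   = exp(lnE);                          % HDR radiance, one channel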

HDR radiance map
Note: the radiance is in log space.
g function
Then we use the radiance map as the light source to render 3D virtual objects in Blender. The objects used in this homework are from turbosquid.com. First we model the scene and render the objects, which gives an image R. To insert the rendered objects into the original image I, we generate an object mask M, so the basic composite is M.*R + (1-M).*I. We also want to capture shadows and reflections, so we render the scene without the objects, i.e. just the plane/ground, which gives an image E. The term (1-M).*(R-E).*c adds the shadows and reflections, so the final composite is M.*R + (1-M).*I + (1-M).*(R-E).*c.
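A minimal MATLAB sketch of that compositing step (the file names and the gain c = 1.0 are hypothetical placeholders; c is tuned by eye):

    I = im2double(imread('scene1.jpg'));         % background photo
    R = im2double(imread('render_objects.png')); % render with objects
    E = im2double(imread('render_empty.png'));   % render of plane only
    M = im2double(imread('mask.png'));           % object mask
    M = double(M(:,:,1) > 0.5);                  % binarize
    M = repmat(M, [1 1 3]);                      % one mask per channel
    c = 1.0;                                     % shadow/reflection gain
    final = M.*R + (1-M).*I + (1-M).*(R-E).*c;
    imwrite(min(max(final, 0), 1), 'composite.png');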
Rendering Objects

More Results

Scene2
Notice the different white balance between the objects and the background.
Scene3
Scene4
Notice that the shadow color tends toward red.
This can be fixed by using an achromatic material for the plane. Note: the light intensity of the fixed scene is different from the one above.
I also tried to implement a local tone-mapping operator. The implementation can map an HDR image to an LDR one, but the colors of the LDR output look wrong.
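A common cause of wrong colors is compressing each RGB channel independently; the usual remedy, as in Durand and Dorsey 2002, is to tone map only the luminance and reattach the per-channel color ratios afterwards. Below is a minimal sketch of that idea (not my original implementation), using a Gaussian blur as a crude stand-in for a proper edge-preserving bilateral base layer:

    hdr = hdrread('memorial.hdr');              % HxWx3 radiance map
    Lum = max(0.2126*hdr(:,:,1) + 0.7152*hdr(:,:,2) ...
            + 0.0722*hdr(:,:,3), eps);          % luminance
    logL   = log10(Lum);
    base   = imgaussfilt(logL, 8);              % crude base layer
    detail = logL - base;                       % detail layer, kept
    range  = max(eps, max(base(:)) - min(base(:)));
    scale  = log10(5) / range;                  % compress base range
    Ld     = 10.^((base - max(base(:)))*scale + detail);
    ldr    = (hdr ./ Lum) .* Ld;                % reattach color ratios
    ldr    = min(max(ldr, 0), 1).^(1/2.2);      % clamp + gamma encode
    imwrite(ldr, 'tonemapped_memorial.png');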
memorial.hdr: tone-mapped result, and the HDR shown at different exposures
Barcelona_Rooftops: tone-mapped result, and the HDR shown at different exposures