In this assignment, the goal is to insert virtual objects into real photographs. We do that by first creating an HDR image of the scene from several LDR images taken at different exposures. In the middle of this scene we place a spherical mirror, which is a quick way to capture the light in the environment. Finally, we render synthetic objects into the photograph, using the HDR picture as an environment map to illuminate the virtual models.

All these steps are illustrated below, together with the results.

Exposure time: 1/25s.

Exposure time: 1/60s.

Exposure time: 1/125s.

Exposure time: 1/200s.

Exposure time: 1/500s.

Now, to recover the function $g(Z_{ij}) = \ln(f^{-1}(Z_{ij}))$ for pixel $i$ in image $j$, we can construct a system of linear equations in which the intensities $Z_{ij}$ and the exposure times $\Delta t_j$ are known. For that we randomly choose some pixels and take their intensities and exposure times across all images in the sequence. For example, if we pick 4 pixels and have 5 images, we obtain 20 equations that determine the value of $g$ given the intensities of the 4 pixels at different exposures. Since this approach uses the pixel intensities directly, we have to account for saturated values such as 0 and 255. To get around this issue we can weight each sampled pixel according to its intensity. In this assignment I used the indicated MATLAB function

`w = @(z) double(128-abs(z-128))`

which gives the maximum weight to a pixel with intensity 127.5 and goes linearly to zero as it approaches the extreme values.
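Putting the pieces together, the system can be assembled and solved with ordinary least squares. The sketch below is a Python/NumPy transcription of this step (function and variable names are my own; `Z` holds the sampled integer intensities, one row per pixel and one column per exposure, and `B` the log exposure times):

```python
import numpy as np

def w(z):
    """Hat weighting: maximum at mid-range, (near) zero at the saturated extremes."""
    return 128.0 - np.abs(z - 128.0)

def gsolve(Z, B, lam):
    """Recover g over [0, 255] from sampled integer intensities Z
    (pixels x images) and log exposure times B."""
    n = 256
    npix, nimg = Z.shape
    A = np.zeros((npix * nimg + n, n + npix))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(npix):           # one data equation per sample and exposure
        for j in range(nimg):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n + i] = -wij
            b[k] = wij * B[j]
            k += 1
    A[k, 128] = 1.0                 # fix the curve's scale: g(128) = 0
    k += 1
    for z in range(1, n - 1):       # smoothness term weighted by lambda
        A[k, z - 1] = lam * w(z)
        A[k, z] = -2 * lam * w(z)
        A[k, z + 1] = lam * w(z)
        k += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n]                    # g; x[n:] are the log irradiances of the samples
```

The extra rows weighted by `lam` enforce smoothness of $g$, and the single row pinning $g(128) = 0$ removes the scale ambiguity between $g$ and the log irradiances.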
After preparing the system of linear equations we have an overdetermined system, which can be solved with SVD, for example. The result is the value of $g$ for every discrete intensity in the interval $[0, 255]$. To recover the radiance of the pixel at position $i,j$ in channel $c$, it suffices to apply equation 6 from Debevec's paper, which translates to MATLAB notation as `Radiance(i,j,c) = exp(sum(w(Z_R).*(g(Z_R+1)-B),2)./sum(w(Z_R),2))`

where `w` is the weight function, `Z_R` is a vector with the intensity of the pixel at each exposure, and `B` is a vector with the log exposure time of each picture. Below we can see the function $g$ and the resulting radiance map for the sequence shown above, both displayed with the MATLAB function `imagesc`.

Function $g$ using 300 pixels and $\lambda = 50$.

Recovered radiance for the red channel.

Recovered radiance for the green channel.

Recovered radiance for the blue channel.
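For reference, the MATLAB one-liner above can be written in Python/NumPy for a whole exposure stack at once (a sketch, assuming `images` is a list of integer-valued exposures and `g` is the curve recovered in the previous step):

```python
import numpy as np

def radiance_map(images, log_dt, g):
    """Per-pixel radiance via Debevec's equation 6:
    ln E = sum_j w(Z_j) * (g(Z_j) - ln dt_j) / sum_j w(Z_j)."""
    Z = np.stack(images, axis=-1).astype(int)   # ... x n_exposures
    wz = 128.0 - np.abs(Z - 128.0)              # hat weights
    num = np.sum(wz * (g[Z] - log_dt), axis=-1)
    den = np.sum(wz, axis=-1)
    den[den == 0] = 1e-8                        # guard fully saturated pixels
    return np.exp(num / den)
```

For color images the same call works per channel, since the weighting and the sum run only over the exposure axis.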

The final composite is computed with the differential rendering equation

`composite = M.*R + (1-M).*(I + (R-E).*c)`

where `R` is the rendered image with objects and local scene geometry, `E` is the rendered scene without objects, `I` is the background image, `M` is a mask outlining the objects' positions in the image, and `c` is a constant that modulates the lighting effects of the inserted models.
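In Python/NumPy the same composite is a one-liner over the whole image (a sketch, assuming float images in $[0,1]$ and a single-channel mask):

```python
import numpy as np

def composite(R, E, I, M, c=1.0):
    """Differential-render compositing: objects come straight from R where
    the mask is on; elsewhere the background I is modulated by the light
    the objects add or remove (R - E), scaled by c."""
    if M.ndim == R.ndim - 1:
        M = M[..., None]            # broadcast single-channel mask over color
    return M * R + (1 - M) * (I + (R - E) * c)
```

Outside the mask the term `(R - E) * c` carries only the shadows and interreflections the objects cast on the local scene, which is what makes them sit convincingly in the photograph.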
My first render uses the same objects with the same properties as the file provided with the assignment. The table was modeled as a plane with a diffuse BSDF. Finding a good color for the table was really difficult, though, since no single color matched the whole variation of red in the fabric: for any color I picked, the discontinuity was easy to see in the mirror ball. So I decided to apply the background image as a texture to the plane (actually a crop of it, extracted with a projective transformation using TP4), following this simple tutorial. Another advantage of using a texture is that the reflections (especially in the mirror ball) look more realistic than with a single flat color on the plane.
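As an illustration of that projective crop, here is a minimal NumPy sketch (a direct linear transform for the homography and nearest-neighbour sampling, standing in for the TP4 code; the corner coordinates are hypothetical):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 projective map sending 4 src points to dst."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def rectify_plane(background, corners, size=(256, 256)):
    """Sample the quadrilateral `corners` ((x, y), clockwise from top-left)
    of the background photo into a rectangular texture for the table plane."""
    w, h = size
    dst = [(0, 0), (w - 1, 0), (w - 1, h - 1), (0, h - 1)]
    Hm = homography(dst, corners)           # texture coords -> photo coords
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    p = np.stack([u.ravel(), v.ravel(), np.ones(u.size)])
    q = Hm @ p
    x = np.clip(np.round(q[0] / q[2]).astype(int), 0, background.shape[1] - 1)
    y = np.clip(np.round(q[1] / q[2]).astype(int), 0, background.shape[0] - 1)
    return background[y, x].reshape(h, w, -1)
```

A bilinear sampler would give a smoother texture; nearest-neighbour keeps the sketch short.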

Empty scene.

Texture used for the plane.

Rendered image without objects.

Rendered image with objects.

Mask.

Final result.

Empty scene.

Rendered image without objects.

Rendered image with objects.

Mask.

Final result.

Empty scene.

Rendered image without objects.

Rendered image with objects.

Mask.

Final result.

Empty scene.

Texture used for the plane.

Rendered image without objects.

Rendered image with objects.

Mask.

Final result.

Empty scene.

Texture used for the plane.

Rendered image without objects.

Rendered image with objects.

Mask.

Final result.