Real-time Photorealistic Rendering of Virtual Objects with Real Lighting for Augmented Reality: Application to the “Castelet Électronique”
Bin Hang
Patrick Hébert (Supervisor)
Problem: Augmented Reality (AR) overlays virtual imagery on images of the real world. In many AR applications, a great challenge is the seamless integration of virtual objects with the real world, such that a user perceives the computer-generated imagery as indistinguishable from the surrounding real objects and scenery: virtual objects should appear as if they were real objects photographed in the scene. To achieve this photorealistic rendering of virtual objects, applying real-scene lighting information to the virtual objects is a key factor. It not only improves the realism of virtual object details, but also ensures consistency between the appearance of the virtual objects and that of the surrounding real objects. This is especially challenging when the real lighting environment changes dynamically. This study deals with the acquisition of real-world lighting information, as well as the proper way of applying it to the real-time photorealistic rendering of virtual objects with non-uniform material properties.
Motivation: This project is motivated by the “Castelet Électronique” application. A castelet is a small model of a stage that facilitates the design of theatrical productions. The “Castelet Électronique”, which employs tele-collaboration and augmented reality technologies, allows artists at different sites to work together on common projects. Local artists work on the real castelet, while artists working remotely can add, remove, move or modify virtual objects in the scene. The real and virtual parts are combined through an augmented reality system to form the complete “Castelet Électronique”, in which the show is designed. Rendering the virtual objects with the real lighting gives them a more realistic appearance. More importantly, since lighting is one of the main factors in a theatrical production, numerous projectors, with fixed positions and directions, are installed inside the castelet. The light colour and intensity of each projector can be fully controlled through computers so as to mimic all kinds of lighting effects. For artists, it is highly desirable to see the right effect on an object when the lighting environment changes, whether the object is real or virtual.
Approach: There are two main approaches to obtaining the lighting information. The first is to model the characteristics of each projector, including its position, orientation, output light spectrum, etc. This model is then used to construct and apply the virtual lighting at run-time, with parameters obtained from the real projector control vectors of the castelet. The second approach is to measure light at different places in the castelet without an explicit model of the projectors. In this approach, a light probe, usually a small reflective ball, is used together with a video camera capturing images of the probe. Images of the probe contain the lighting colour and direction information, which is then analyzed to render the virtual object [1]. This technique is referred to as Image-Based Lighting (IBL). In this work, we focus on the latter approach. More particularly, we want to study whether it is more appropriate to capture lighting in real time or to pre-model it offline; we will therefore experiment with and compare both strategies. In real-time capture, the light probe is placed where the virtual object should be, and images of it are exploited directly to render the expected lighting effect [2]. The expected problem is the quality of the captured model. In the offline modeling procedure, the castelet is divided into many regions. In each region, images of the light probe, sequentially lit by each projector, are taken to compute a set of light fields through IBL. At run-time, according to the projector control vector and the virtual object position, several corresponding light fields are combined to render the object. Although it is more complex to apply, this approach makes it possible to capture high-quality images offline (namely high dynamic range radiance maps). The light probe is no longer needed at run-time, and more photorealistic rendering is also possible. The pros and cons of each approach will be examined.
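As a concrete illustration of how a probe image yields lighting directions, the sketch below maps a pixel on a mirror-sphere probe to a world-space direction. It is a minimal sketch, assuming an orthographic view of the sphere along the -z axis (a common simplification when the camera is far from the probe); the function name and coordinate convention are hypothetical, not from the project itself.

```python
import numpy as np

def probe_pixel_to_direction(u, v):
    """Map a pixel on a mirror-sphere probe image to a world direction.

    (u, v) are normalized coordinates in [-1, 1] over the sphere's image
    disk, assuming an orthographic camera looking along -z.
    """
    r2 = u * u + v * v
    if r2 > 1.0:
        raise ValueError("pixel lies outside the probe's image disk")
    # Surface normal of the sphere at this pixel.
    n = np.array([u, v, np.sqrt(1.0 - r2)])
    # Reflect the viewing direction d = (0, 0, -1) about the normal:
    # r = d - 2 (d . n) n  gives the environment direction seen here.
    d = np.array([0.0, 0.0, -1.0])
    return d - 2.0 * np.dot(d, n) * n
```

For example, the centre of the probe reflects the direction back toward the camera, while the silhouette edge reflects the direction directly behind the probe.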
Finally, for real-time rendering, we will study and exploit GPU shader programming.
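The run-time combination of light fields in the offline approach rests on the additivity of light transport: an environment map for an arbitrary projector setting is the sum of the maps captured with each projector lit alone, weighted by that projector's control value. A minimal sketch of this weighting follows; the function name and array layout are hypothetical assumptions, not part of the project.

```python
import numpy as np

def combine_light_fields(basis, control):
    """Linearly combine per-projector light-field images.

    `basis` is assumed to be a (num_projectors, H, W, 3) array of HDR
    environment maps, each captured with one projector at unit intensity;
    `control` is a length-num_projectors vector of projector intensities.
    Because light transport is additive, the combined map is simply the
    weighted sum of the basis maps.
    """
    basis = np.asarray(basis, dtype=np.float64)
    control = np.asarray(control, dtype=np.float64)
    # Sum over the projector axis, weighting each map by its control value.
    return np.tensordot(control, basis, axes=1)  # shape (H, W, 3)
```

This linearity is also what makes a per-region basis of light fields sufficient: no new capture is needed when only the control vector changes.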
Challenges: For the real-time approach, the main challenge is to increase the light probe image quality. Since the probe image is extracted from the scene image, and the probe may move around in the scene, the probe image may be of low and varying resolution. Producing HDR imagery in real time also remains a difficult task. Together, these might result in a relatively poor simulation of the real lighting. For the non-real-time approach, the model that maps projector colour control vectors to appropriate light-field combinations may be complex and less flexible with respect to scene modifications. Finding a vector basis from which arbitrary light fields of different colours can be composited remains a challenge.
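To make the HDR difficulty concrete, the sketch below merges bracketed low-dynamic-range exposures into a single radiance map. It is a minimal sketch assuming a linear camera response (real pipelines first recover the response curve from the exposures); the function name and the hat-shaped weighting are illustrative assumptions.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge bracketed LDR exposures into one HDR radiance map.

    `images` are float arrays with values in [0, 1], taken at shutter
    `times`. Assuming a linear response, each pixel's radiance estimate
    is pixel / time; estimates are averaged with a hat weight that
    downweights values near the clipping points 0 and 1.
    """
    num = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    den = np.zeros_like(num)
    for img, t in zip(images, times):
        img = np.asarray(img, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 1 at mid-grey, 0 when clipped
        num += w * img / t
        den += w
    # Guard against pixels that are clipped in every exposure.
    return num / np.maximum(den, 1e-8)
```

The real-time difficulty noted above is that this merge needs several differently exposed frames of a possibly moving probe, which is hard to obtain from a single video stream.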
Applications: Besides the “Castelet Électronique”, the techniques employed can also be used in other augmented reality applications such as interactive games, education, architecture, city planning, etc.
Expected results: We will develop a method to capture and model the real-world lighting in the “Castelet Électronique”. We will also provide a method for real-time rendering of virtual objects in AR scenes. Finally, we will collaborate to integrate these techniques into the “Castelet Électronique” project.
Calendar: September 2005 - December 2007
Last modification: Sep 28 2007 1:30PM by binhang


©2002-. Computer Vision and Systems Laboratory. All rights reserved