Inertial positioning module for improved robustness of reference point tracking in a sequence of images
Martin Labrie
Patrick Hébert (Supervisor)
Problem: Matching reference points observed in different images is a problem that concerns several areas of computer vision, in particular the 3D reconstruction of scenes or objects. The main difficulties are erroneous matches and tracking failures. The goal of this project is to create a hardware and software module for inertial 3D positioning so as to automate and improve this matching procedure. The module must be compact and portable to different types of equipment.
Motivation: This project arises from a matching problem encountered with the hand-held range sensor developed in the Computer Vision and Systems Laboratory (CVSL). This sensor tracks different reference points in images so as to determine its relative movement in space. Another application of this module is estimating the position of a camera in space so as to match image points and thus facilitate the reconstruction of a 3D model of a photographed scene or object. This project thus combines aspects of computer vision and photogrammetry. With the availability of new low-cost electronic components, the design of an inertial positioning device will be reviewed, and the device will be integrated with software that improves the automatic matching of reference points in 3D reconstruction.
Approach: A simple approach has been used with the CVSL sensor. A cloud of 3D reference points is continuously updated in the global reference frame. If the new position of the sensor is estimated using the module, then it is possible to predict the position of a certain number of reference points in an image. For a single camera, it is also possible to infer the displacement of a point in the image by approximating the 3D displacement provided by the module.
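The prediction step described above can be sketched as a simple perspective projection: given a pose estimate from the inertial module, each 3D reference point is projected into the image to obtain a predicted pixel position, which narrows the search for its match. The pinhole model below is a minimal illustrative sketch; the intrinsic matrix `K`, the pose `R, t`, and the function name are assumptions for illustration, not the actual CVSL implementation.

```python
import numpy as np

def predict_image_positions(points_3d, R, t, K):
    """Project 3D reference points into the image for a predicted sensor pose.

    points_3d : (N, 3) reference points in the global frame
    R, t      : rotation (3x3) and translation (3,) mapping the global
                frame to the camera frame (e.g., the pose predicted by
                the inertial module)
    K         : (3, 3) camera intrinsic matrix
    Returns an (N, 2) array of predicted pixel coordinates.
    """
    cam = points_3d @ R.T + t          # global frame -> camera frame
    proj = cam @ K.T                   # apply the intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective division by depth

# Hypothetical example: identity pose and simple intrinsics.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0],   # on the optical axis
                [0.5, 0.0, 2.0]])
uv = predict_image_positions(pts, R, t, K)
# the point on the optical axis lands at the principal point (320, 240)
```

In practice, each predicted position would define a small search window in the new image, so the matcher only compares a reference point against candidates inside that window instead of the whole image.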
Challenges: The CVSL sensor matches points by tracking them between images. The sensor must therefore be able to locate the different reference points rapidly and efficiently so that it can re-evaluate its current position in space. The matching algorithms presently in use require a large amount of computing time and sometimes require human intervention; the process must therefore be accelerated and automated using a positioning device. The greatest challenge is integrating the system for the 3D reconstruction of a scene or object from the photographs taken thereof.
Applications: There are numerous applications for this project. The modeling of objects, scenes, buildings or even streets is in great demand. For example, a site could be reconstructed in 3D from photographs and the resulting model transmitted. Researchers in the military field are very interested in this type of technology, which can be used to model urban environments in order to simulate interventions.
Calendar: September 2002 - September 2004
Last modification: Sep 28 2007 2:33PM by mlabrie


©2002-. Computer Vision and Systems Laboratory. All rights reserved