A self-referencing hand-held 3D sensor
Richard Khoury
Patrick Hébert (Supervisor)
The CVSL is developing a hand-held range sensor designed to digitize real 3D objects. In order to integrate range measurements into a global coordinate system without human intervention, the sensor must be able to compute its own position in space. The self-referencing algorithms presently in use are based on observing and tracking fixed laser reference points projected onto the object being digitized. However, these algorithms often lack robustness, or they severely restrict the number of reference points in order to allow real-time processing.
A sensor that can move freely in space enables the rapid construction of a 3D surface model of a real object, even when not all facets are visible from a single viewpoint or when some facets are difficult to access. To integrate all of the measurements obtained with such a sensor, however, the motion of the sensor between successive images must be estimated, thereby providing its position in a global coordinate system. A sensor that references itself from its own observations is attractive because it limits the dependence on an external positioning device, which is precise but costly. Moreover, self-referencing should increase freedom of movement in the workspace and consequently lead to a greater reduction in modeling time.
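Estimating the motion of the sensor between two images amounts to recovering a rigid transformation from matched reference points. A minimal sketch of one standard approach, the SVD-based (Kabsch) method, is shown below; it assumes the point correspondences are already established, and the function name is illustrative rather than taken from the laboratory's software:

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Estimate rotation R and translation t such that Q ≈ P @ R.T + t,
    given matched 3D reference points P and Q, each of shape (N, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)     # centroids
    H = (P - cP).T @ (Q - cQ)                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Accumulating these inter-frame transformations places every range image in the global coordinate system; in practice a least-squares estimate over many points also averages out measurement noise.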
At present, the sensor uses two different algorithms: the first tracks the reference points within a continuous sequence, while the second fuses two sequences, which is essential whenever acquisition is interrupted. The latter raises a recognition problem, which is addressed using the stability of the Delaunay tetrahedralization built from each set of reference points. To avoid failures during tracking and to improve computational performance, the tracking and recognition aspects will be integrated into a hybrid algorithm. Finally, the possibility of extending the system to detect passive points, eliminating the need to project laser points in all situations, will be evaluated.
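One way the stability of the Delaunay structure can be exploited for recognition is to describe each tetrahedron by its sorted edge lengths, a signature that is invariant under rigid motion, and then match tetrahedra between two point sets. The sketch below illustrates this idea only in a simplified, brute-force form and is not the laboratory's actual algorithm:

```python
import numpy as np
from scipy.spatial import Delaunay

def tetra_signatures(points):
    """Build the Delaunay tetrahedralization of a 3D point set and return,
    for each tetrahedron, its vertex indices and sorted edge lengths.
    Sorted edge lengths are invariant under rigid motion of the set."""
    tri = Delaunay(points)
    sigs = []
    for simplex in tri.simplices:               # 4 vertex indices each
        p = points[simplex]
        edges = [np.linalg.norm(p[i] - p[j])
                 for i in range(4) for j in range(i + 1, 4)]
        sigs.append((tuple(simplex), np.sort(edges)))
    return sigs

def match_tetrahedra(sigs_a, sigs_b, tol=1e-6):
    """Pair tetrahedra whose edge-length signatures agree within tol."""
    matches = []
    for ia, sa in sigs_a:
        for ib, sb in sigs_b:
            if np.allclose(sa, sb, atol=tol):
                matches.append((ia, ib))
    return matches
```

Matched tetrahedra yield candidate point correspondences between the two sequences, from which the rigid transformation linking them can then be estimated. The quadratic cost of this naive comparison is precisely why the number of reference points must be kept small in a real-time setting.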
The self-referencing algorithms presently in use limit the number of reference points for tracking (fewer than 50) and require a viewpoint from which most of the points are visible. Both constraints must be eliminated. However, removing the limit on the number of points will likely increase the complexity of the matching step, which is problematic in a real-time system, while removing the visibility constraint requires constructing, maintaining, and continually validating a model of the reference points.
A sensor equipped with a robust and precise self-referencing system will be very useful for interactive modeling, reducing both acquisition and modeling time. Furthermore, in the many settings where the object to be modeled cannot be moved (field measurements, archaeology, forensics, or engineering assessments), the flexibility of such a system will be an asset, if not a necessity.
Calendar: September 2002 - September 2004
Last modification: 2007/09/28 by khoury


©2002-. Computer Vision and Systems Laboratory. All rights reserved