Positioning of a hand-held range sensor
Ying Wang
Patrick Hébert (Supervisor)
Problem: To capture the geometry of an object, it is often necessary to mount the object and/or the range sensor on a translation or rotation stage. Multiple images must be captured from different viewpoints to obtain a complete scan, especially when top and bottom views are required. A 3D digitizing system based on a laser range sensor should therefore be flexible, portable and easy to use; this need has led to the hand-held sensor. Such a sensor combines a range sensor with a positioning device, allowing the sensor to be moved freely around the object while the measurements are automatically integrated in a common global coordinate system. Estimating a precise sensor pose in real time is the key problem.
Motivation: Positioning devices include mechanical, electromagnetic, optical and inertial systems. Each has its limitations: restricted freedom of motion (mechanical devices), limited precision and accuracy (electromagnetic devices), visibility requirements (optical devices), and time-integration drift (inertial devices). Furthermore, special attention must be paid to synchronizing and calibrating these devices with the range sensor. To remove these limitations and improve freedom of motion, it is natural to reduce the dependency on the positioning device, i.e. to self-reference the sensor from its own observations. This is the aim of this project.
Approach: To position the sensor in a global coordinate system, a set of reference points is projected onto the scene by a fixed, independent projector. The imaging sensor then captures the laser pattern - for surface measurements - and the reference points simultaneously. One advantage of this approach is that no physical targets need to be placed on the object; another is the possibility of projecting points onto one or several selected areas. The main steps are as follows:
  • Tracking in a sequence while the sensor is continuously moving;
  • Registration of two independent sequences;
  • Integration (moving all of the individual frames in a global coordinate system).
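The integration step above can be sketched as a simple rigid-transform accumulation. This is a minimal illustration, not the project's actual pipeline; the frame representation and the `integrate_frames` helper are hypothetical, and the per-frame poses (R, t) are assumed to come from the tracking and registration steps:

```python
import numpy as np

def integrate_frames(frames):
    """Map each frame's 3D points into the global coordinate system.

    frames: list of (points, (R, t)) pairs, where points is an (N, 3)
    array in the sensor frame, R is the 3x3 rotation and t the
    3-vector translation of that frame's estimated pose.
    """
    global_points = []
    for points, (R, t) in frames:
        # Apply the rigid transform p_global = R @ p_sensor + t
        # to every row of the points array at once.
        global_points.append(points @ R.T + t)
    return np.vstack(global_points)
```

With consistent poses, points measured from different viewpoints fall onto the same surface in the global frame, which is what makes a complete object scan possible without a mechanical stage.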
Challenges: The main challenge of this project lies in correspondence, both 2D and 3D. For 2D correspondence between a stereo pair, the epipolar constraint is exploited; for 3D correspondence between two sets of 3D reference points, the rigidity constraint is used. Since the result of the 2D correspondence directly influences the 3D correspondence, 2D correspondence is of primary concern.
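To illustrate how the epipolar constraint prunes 2D candidate matches, here is a minimal sketch assuming a known fundamental matrix F between the stereo pair; the function names and the pixel tolerance are hypothetical, and a real system would use a robust estimator rather than exhaustive pairing:

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Symmetric point-to-epipolar-line distance for homogeneous
    image points x1 (left) and x2 (right), given fundamental matrix F."""
    l2 = F @ x1      # epipolar line of x1 in the right image
    l1 = F.T @ x2    # epipolar line of x2 in the left image
    d2 = abs(x2 @ l2) / np.hypot(l2[0], l2[1])
    d1 = abs(x1 @ l1) / np.hypot(l1[0], l1[1])
    return 0.5 * (d1 + d2)

def prune_matches(F, pts1, pts2, tol=1.0):
    """Keep candidate pairs whose epipolar distance is below tol pixels."""
    return [(i, j)
            for i, x1 in enumerate(pts1)
            for j, x2 in enumerate(pts2)
            if epipolar_distance(F, x1, x2) < tol]
```

A correct match x1 <-> x2 satisfies x2^T F x1 ≈ 0, so its epipolar distance is near zero; candidates far from the epipolar line are rejected before any 3D reasoning takes place.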
Applications: There are many applications for robustly and accurately computing the camera motion, such as 3D model building, object tracking and augmented reality.
Calendar: September 2000 - September 2002
Last modification: Sep 26 2007 10:56AM by yingwang


©2002-. Computer Vision and Systems Laboratory. All rights reserved