Understanding positioning from multiple images

Authors:

Abstract

It is possible to recover the three-dimensional structure of a scene using only correspondences between images taken with uncalibrated cameras (Faugeras 1992). The reconstruction obtained this way is defined only up to a projective transformation of 3D space. Nevertheless, this kind of structure already supports some spatial reasoning, such as finding a path. In order to perform more specific reasoning, or to work with a robot moving in Euclidean space, Euclidean or affine constraints have to be added to the camera observations. Such constraints arise from knowledge of the scene: the location of points, geometrical constraints on lines, etc. This paper first presents a reconstruction method for the scene, then discusses how the framework of projective geometry allows symbolic or numerical information about positions to be derived, and how knowledge about the scene can be used to compute symbolic or numerical relationships. Implementation issues and experimental results are discussed.
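As a brief illustration of the projective ambiguity mentioned in the abstract (the notation below is assumed for this sketch, not taken from the paper): if the cameras are modelled by $3 \times 4$ projection matrices $P_i$ and the scene by homogeneous points $X_j$, the image measurements constrain only the products $P_i X_j$, so any invertible $4 \times 4$ matrix $H$ yields an equally valid reconstruction:

$$x_{ij} \simeq P_i X_j = (P_i H^{-1})(H X_j), \qquad \text{for any invertible } H \in \mathbb{R}^{4 \times 4}.$$

Hence the structure $\{X_j\}$ recovered from uncalibrated correspondences is determined only up to such a projective transformation $H$, and additional Euclidean or affine knowledge about the scene is needed to restrict $H$ further.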

Keywords:

Review history: Available online 20 April 2000.

DOI: https://doi.org/10.1016/0004-3702(95)00035-6