Interactive 3D model extraction from a single image

Authors:


Abstract:

We present a system at the junction between Computer Vision and Computer Graphics that produces a three-dimensional (3D) model of an object as observed in a single image, with a minimum of high-level interaction from the user. The input to our system is a single image. First, the user points, coarsely, at image features (edges) that are subsequently automatically and reproducibly extracted in real time. The user then performs a high-level labeling of the curves (e.g. limb edge, cross-section) and specifies relations between edges (e.g. symmetry, surface or part). NURBS are used as the working representation of image edges. The objects described by these user-specified, qualitative relationships are then reconstructed either as a set of connected parts modeled as Generalized Cylinders, or as a set of 3D surfaces for 3D bilaterally symmetric objects. In both cases, the texture is also extracted from the image. Our system runs in real time on a PC.
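As an illustration of the curve-fitting step the abstract mentions, the sketch below fits a parametric cubic B-spline (a NURBS curve with unit weights) to a few coarsely indicated 2D edge points. This is a minimal, hypothetical example rather than the authors' implementation; the sample points, the smoothing factor, and the use of SciPy are assumptions made only for illustration.

```python
# Minimal sketch (not the paper's implementation): fit a smooth parametric
# B-spline -- a special case of NURBS with unit weights -- to 2D edge points
# that a user might have coarsely traced in an image.
import numpy as np
from scipy import interpolate

# Hypothetical edge samples (pixel coordinates) along an object limb edge.
edge_pts = np.array([
    [10.0, 12.0], [14.0, 20.0], [20.0, 31.0], [27.0, 40.0],
    [36.0, 47.0], [47.0, 52.0], [60.0, 55.0], [74.0, 56.0],
])

# Fit a cubic (k=3) parametric spline; s > 0 smooths noisy, coarse input.
tck, u = interpolate.splprep([edge_pts[:, 0], edge_pts[:, 1]], k=3, s=2.0)

# Evaluate the fitted curve densely, e.g. for display or later 3D inference.
u_fine = np.linspace(0.0, 1.0, 200)
x_fine, y_fine = interpolate.splev(u_fine, tck)
print(np.column_stack([x_fine, y_fine])[:5])
```

The fitted parametric curve gives a compact, resolution-independent description of an image edge of the kind the system can then label and relate to other edges before 3D reconstruction.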

Keywords: Interactive segmentation, NURBS fitting, Bilateral symmetry, Volumetric inference, Three-dimensional modeling

Article history: Received 8 March 1999, Revised 7 July 2000, Accepted 4 September 2000, Available online 27 April 2001.

DOI: https://doi.org/10.1016/S0262-8856(00)00081-0