PGNet: A Part-based Generative Network for 3D object reconstruction

Authors:

Highlights:

Abstract

Deep-learning generative methods have developed rapidly; for example, various single- and multi-view generative methods for meshes, voxels, and point clouds have been introduced. However, most single-view 3D reconstruction methods generate whole objects at once, or in a cascaded way for dense structures, and therefore miss the local details of fine-grained structures. Such methods are also unsuitable when the generative model is required to provide semantic information for parts. This paper proposes an efficient part-based recurrent generative network that generates object parts sequentially from a single-view image and its semantic projection. The advantage of our method is its awareness of part structures; hence it generates more accurate models with fine-grained structures. Experiments show that our method attains higher accuracy than other point set generation methods, particularly for local details.
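The paper itself provides no code here; as a rough illustration of the sequential, part-by-part generation idea described in the abstract, the sketch below shows a toy recurrent generator that emits one part's point set per step from an image feature vector, with the hidden state carrying context from previously generated parts. All names, shapes, and weights are hypothetical stand-ins, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT = 128   # hypothetical image-feature size
HID = 64     # hypothetical recurrent hidden size
PTS = 256    # points emitted per part

# Hypothetical weights; in the real network these would be learned end to end.
W_in = rng.standard_normal((HID, FEAT + HID)) * 0.01
W_out = rng.standard_normal((PTS * 3, HID)) * 0.01

def generate_parts(image_feature, num_parts):
    """Sequentially emit one (PTS x 3) point set per semantic part.

    The recurrent hidden state summarizes the parts generated so far,
    so each new part can be placed consistently with the previous ones.
    """
    h = np.zeros(HID)
    parts = []
    for _ in range(num_parts):
        h = np.tanh(W_in @ np.concatenate([image_feature, h]))
        parts.append((W_out @ h).reshape(PTS, 3))
    return parts

parts = generate_parts(rng.standard_normal(FEAT), num_parts=4)
print(len(parts), parts[0].shape)  # 4 parts, each a 256x3 point cloud
```

Generating parts one at a time, rather than the whole object at once, is what lets the method attach a semantic label to each emitted point set.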

Keywords: 3D reconstruction, Point cloud generation, Part-based, Semantic reconstruction

Article history: Received 25 July 2019, Revised 24 January 2020, Accepted 25 January 2020, Available online 28 January 2020, Version of Record 18 May 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.105574