Cross-view image synthesis using geometry-guided conditional GANs
Abstract
We address the problem of generating images across two drastically different views, namely ground (street) and aerial (overhead) views. Image synthesis by itself is a very challenging computer vision task, and it is even more so when generation is conditioned on an image in another view. Due to the difference in viewpoints, there is only a small overlapping field of view and little common content between these two views. Here, we try to preserve the pixel information between the views so that the generated image is a realistic representation of the cross-view input image. For this, we resort to homography as a guide to map the images between the views based on the common field of view, preserving the details in the input image. We then use generative adversarial networks to inpaint the missing regions in the transformed image and add realism to it. Our exhaustive evaluation and model comparison demonstrate that utilizing geometry constraints adds fine details to the generated images and can be a better approach for cross-view image synthesis than purely pixel-based synthesis methods.
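The homography step described above maps pixel coordinates from one view into the other via a 3x3 projective transform. As a minimal illustration of that geometric operation (not the authors' actual pipeline; the function name and example matrix are hypothetical), the sketch below applies a homography `H` to 2D points in homogeneous coordinates:

```python
def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography matrix H.

    Each point (x, y) is lifted to homogeneous coordinates (x, y, 1),
    multiplied by H, then divided by the resulting w to return to
    Cartesian coordinates.
    """
    out = []
    for x, y in pts:
        xh = H[0][0] * x + H[0][1] * y + H[0][2]
        yh = H[1][0] * x + H[1][1] * y + H[1][2]
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        out.append((xh / w, yh / w))  # perspective divide
    return out


# A pure translation expressed as a homography: shift by (+3, -2)
H_translate = [[1, 0, 3],
               [0, 1, -2],
               [0, 0, 1]]
print(apply_homography(H_translate, [(1.0, 1.0)]))  # → [(4.0, -1.0)]
```

In the paper's setting, `H` is estimated from the common field of view between the ground and aerial images; regions with no source pixels after warping are the "missing regions" the GAN then inpaints.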
Article history: Received 18 July 2018, Revised 18 July 2019, Accepted 26 July 2019, Available online 2 August 2019, Version of Record 4 September 2019.
DOI: https://doi.org/10.1016/j.cviu.2019.07.008