A fusion network for road detection via spatial propagation and spatial transformation

Authors:

Highlights:

• A novel end-to-end deep fusion network that fuses multi-modal data at the model level is proposed for road detection.

• A simple but efficient method is proposed to process LiDAR point cloud data with a lightweight deep network.

• The fusion of image data and point cloud data is modeled with a joint anisotropic diffusion based spatial propagation model (see the sketch after this list).

• The perspective view and the bird's-eye view are both considered simultaneously in the deep network for road detection via a spatial transformation model (also covered in the sketch below).

• A training approach is introduced for better learning of the proposed fusion network.
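
The joint anisotropic diffusion based spatial propagation and the perspective-to-bird's-eye-view transformation named in the highlights can be pictured with a short PyTorch sketch. The code below is only an illustration under assumed shapes, not the authors' implementation: the class `GuidedPropagation`, the function `warp_to_birds_eye_view`, the 3×3 neighbourhood, the iteration count, and the ground-plane homography are all hypothetical choices, and the paper's actual fusion network and training procedure are not reproduced here.

```python
# Minimal, illustrative sketch (NOT the paper's implementation): guided spatial
# propagation of sparse LiDAR features using image-derived affinities, followed
# by a homography-based warp from the perspective view to a bird's-eye view.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GuidedPropagation(nn.Module):
    """Diffuse sparse LiDAR features across the image plane using affinities
    predicted from RGB features (anisotropic-diffusion-style propagation)."""

    def __init__(self, channels: int, iterations: int = 3):
        super().__init__()
        self.iterations = iterations
        # One affinity per 8-neighbour, conditioned on the image features.
        self.affinity = nn.Conv2d(channels, 8, kernel_size=3, padding=1)

    def forward(self, lidar_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = lidar_feat.shape
        # Softmax keeps each update a convex combination of neighbours,
        # so the propagation behaves like a (guided) diffusion process.
        aff = torch.softmax(self.affinity(image_feat), dim=1)        # (b, 8, h, w)
        x = lidar_feat
        for _ in range(self.iterations):
            patches = F.unfold(x, kernel_size=3, padding=1)          # (b, c*9, h*w)
            patches = patches.view(b, c, 9, h, w)
            # Drop the centre pixel (index 4) and mix the 8 neighbours.
            neighbours = torch.cat([patches[:, :, :4], patches[:, :, 5:]], dim=2)
            x = (neighbours * aff.unsqueeze(1)).sum(dim=2)           # (b, c, h, w)
        return x


def warp_to_birds_eye_view(feat: torch.Tensor, homography: torch.Tensor,
                           out_hw: tuple) -> torch.Tensor:
    """Resample a perspective-view feature map onto a bird's-eye-view grid
    with an (assumed known) ground-plane homography and grid_sample."""
    b = feat.shape[0]
    out_h, out_w = out_hw
    device = feat.device
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, out_h, device=device),
                            torch.linspace(-1, 1, out_w, device=device),
                            indexing="ij")
    ones = torch.ones_like(xs)
    bev_pts = torch.stack([xs, ys, ones], dim=-1).reshape(-1, 3)     # (out_h*out_w, 3)
    # Map every BEV cell back into the perspective image (normalised coords).
    src = (homography.to(feat) @ bev_pts.T).T
    src = src[:, :2] / src[:, 2:3].clamp(min=1e-6)
    grid = src.reshape(1, out_h, out_w, 2).expand(b, -1, -1, -1)
    return F.grid_sample(feat, grid, align_corners=False)
```

In this reading, the image-conditioned affinities play the role of diffusion coefficients (propagation is stronger between pixels the image features deem similar), while grid_sample realises the view change as a differentiable resampling, so both steps could sit inside an end-to-end network.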

Keywords: Road detection, Fusion, Spatial propagation, Spatial transformation, Joint anisotropic diffusion

Article history: Received 11 April 2019, Revised 11 October 2019, Accepted 27 November 2019, Available online 28 November 2019, Version of Record 6 January 2020.

DOI: https://doi.org/10.1016/j.patcog.2019.107141