Training my car to see using virtual worlds
Authors:
Highlights:
• Virtual worlds can generate photo-realistic images with pixel-wise ground truth.
• Using this data we can train on-board visual perception systems.
• Using domain adaptation, such systems can operate in real-world environments (see the sketch after these highlights).
• Overall, we can largely avoid cumbersome manual labelling of data.
• The same data can be used for debugging and validating ADAS and autonomous driving.
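To make the pipeline behind these highlights concrete, below is a minimal sketch (not the authors' code) of the idea: train a semantic-segmentation network on synthetic frames whose pixel-wise ground truth comes for free from the renderer, then adapt it to real imagery. The directory layout, the toy TinySegNet architecture, the class count, and the use of supervised fine-tuning on a few labelled real frames as the adaptation step are all illustrative assumptions; the keywords indicate the paper addresses object detection and semantic segmentation and domain adaptation more broadly than this simple form.

```python
# Minimal sketch: train segmentation on synthetic data with free pixel-wise
# labels, then adapt to real images. Paths, class count, and the fine-tuning
# adaptation step are illustrative assumptions, not the paper's exact method.
import glob

import numpy as np
import torch
import torch.nn as nn
from PIL import Image
from torch.utils.data import DataLoader, Dataset


class SegDataset(Dataset):
    """Pairs of RGB frames and pixel-wise class-id masks (same resolution assumed)."""

    def __init__(self, image_dir, label_dir):
        self.images = sorted(glob.glob(f"{image_dir}/*.png"))
        self.labels = sorted(glob.glob(f"{label_dir}/*.png"))

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        img = np.array(Image.open(self.images[idx]).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.array(Image.open(self.labels[idx]), dtype=np.int64)  # one class id per pixel
        return torch.from_numpy(img).permute(2, 0, 1), torch.from_numpy(mask)


class TinySegNet(nn.Module):
    """Deliberately small encoder-decoder; stands in for a real architecture."""

    def __init__(self, num_classes=11):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train(model, loader, epochs, lr=1e-3, device="cpu"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
    return model


if __name__ == "__main__":
    # 1) Train on abundant synthetic data: labels come for free from the virtual world.
    synthetic = DataLoader(SegDataset("synthetic/rgb", "synthetic/labels"),
                           batch_size=4, shuffle=True)
    model = train(TinySegNet(), synthetic, epochs=20)

    # 2) Simplest supervised domain adaptation: fine-tune on a small set of
    #    manually labelled real frames with a lower learning rate.
    real = DataLoader(SegDataset("real/rgb", "real/labels"),
                      batch_size=4, shuffle=True)
    model = train(model, real, epochs=5, lr=1e-4)
```

Fine-tuning on a handful of labelled real frames is only the simplest supervised form of domain adaptation; unsupervised variants instead align synthetic and real feature distributions without requiring any real-world labels.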
Keywords: ADAS, Autonomous driving, Computer vision, Object detection, Semantic segmentation, Machine learning, Data annotation, Virtual worlds, Domain adaptation
Article history: Received 3 May 2016, Revised 9 April 2017, Accepted 21 July 2017, Available online 5 August 2017, Version of Record 30 November 2017.
DOI: https://doi.org/10.1016/j.imavis.2017.07.007