Commonsense visual sensemaking for autonomous driving – On generalised neurosymbolic online abduction integrating vision and semantics

Authors:

Abstract

We demonstrate the need for, and the potential of, systematically integrated vision and semantics solutions for visual sensemaking against the backdrop of autonomous driving. A general neurosymbolic method for online visual sensemaking using answer set programming (ASP) is systematically formalised and fully implemented. The method integrates the state of the art in visual computing and is developed as a modular framework that is generally usable within hybrid architectures for real-time perception and control. We evaluate and demonstrate the method with the community-established benchmarks KITTIMOD, MOT-2017, and MOT-2020. As a use case, we focus on the significance of human-centred visual sensemaking (e.g., involving semantic representation and explainability, question answering, and commonsense interpolation) in safety-critical autonomous driving situations.
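To make the notion of online visual abduction in ASP concrete, the following is a minimal, hypothetical sketch (not the paper's actual encoding): a vision module reports that a cyclist detected at time 1 is no longer detected at time 2, and the solver abduces an occlusion hypothesis to explain the missing observation. The predicates `holds/2`, `occ/2`, and `occluded_by/2` are illustrative names, not identifiers from the paper.

```
% Hypothetical sketch of ASP-based visual abduction (clingo-style syntax).
time(1..2).

% Observations supplied by the vision module.
holds(visible(cyclist), 1).
holds(visible(car), 1).
holds(visible(car), 2).

% Abducible hypothesis: the cyclist may be occluded by the car at time 2.
{ occ(occluded_by(cyclist, car), 2) }.
hidden(O, T) :- occ(occluded_by(O, _), T).

% Integrity constraint: a previously visible object that is no longer
% detected must be explained by some occlusion hypothesis.
:- time(T), T > 1, holds(visible(O), T-1),
   not holds(visible(O), T), not hidden(O, T).
```

Under this encoding, the only answer set satisfying the constraint includes `occ(occluded_by(cyclist, car), 2)`, i.e., the occlusion is abduced as the explanation; minimising such abducibles (e.g., via `#minimize`) would yield preferred explanations, in the spirit of the commonsense interpolation discussed in the abstract.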

Keywords: Cognitive vision, Deep semantics, Declarative spatial reasoning, Knowledge representation and reasoning, Commonsense reasoning, Visual abduction, Answer set programming, Autonomous driving, Human-centred computing and design, Standardisation in driving technology, Spatial cognition and AI

Article history: Received 19 July 2020, Revised 29 April 2021, Accepted 30 April 2021, Available online 7 May 2021, Version of Record 27 May 2021.

DOI: https://doi.org/10.1016/j.artint.2021.103522