Lifelong robotic visual-tactile perception learning

Authors:

Highlights:

• We develop a new Lifelong Visual-Tactile Learning (LVTL) model to learn a sequence of robotic visual-tactile perception tasks continuously. To the best of our knowledge, this is an early exploration of robotic visual-tactile cross-modality learning in a lifelong learning setting.

• We design a modality-specific knowledge library for each modality to capture common intra-modality knowledge across different tasks, preserving the experience shared between previously learned and newly arriving robotic visual-tactile tasks.

• A sparsity-constrained modality-invariant space is constructed to explore the complementary knowledge shared between the visual and tactile modalities, while simultaneously identifying the importance of each modality for newly arriving robotic visual-tactile tasks (a schematic formulation in this spirit is sketched after this list).
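To make the highlights concrete, the following is a minimal sketch of how a dictionary-based lifelong objective with per-modality libraries and a shared invariant space is commonly written. The notation is ours and hypothetical (patterned on ELLA-style lifelong learning), not the paper's actual formulation: $L_m$, $s_m^{(k)}$, $W_m$, $\lambda$, and $\gamma$ are illustrative symbols.

$$
\min_{\{L_m\},\,\{s_m^{(k)}\},\,\{W_m\}}\;
\sum_{k=1}^{K} \sum_{m \in \{v,\,t\}}
\Big[ \mathcal{L}\big( f(x_m^{(k)};\, L_m s_m^{(k)}),\; y^{(k)} \big)
      + \lambda \,\big\| s_m^{(k)} \big\|_1 \Big]
\;+\; \gamma \sum_{k=1}^{K}
\big\| W_v L_v s_v^{(k)} - W_t L_t s_t^{(k)} \big\|_2^2
$$

Here $L_m$ ($m \in \{v, t\}$ for visual and tactile) is a modality-specific knowledge library shared across all $K$ tasks; $s_m^{(k)}$ is a sparse code that selects library components for task $k$, with the $\ell_1$ penalty enforcing sparsity and the code magnitudes indicating each modality's importance for that task; $W_m$ projects both modalities into a common modality-invariant space, where the final term aligns their representations to exploit complementary cross-modal knowledge. Under this kind of scheme, a new task $K{+}1$ is accommodated by fitting only $s_m^{(K+1)}$ and refining the libraries, rather than retraining on all previous tasks.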


Keywords: Lifelong machine learning, Robotics, Visual-tactile perception, Cross-modality learning, Multi-task learning

Article history: Received 27 July 2020, Revised 30 October 2020, Accepted 14 July 2021, Available online 15 July 2021, Version of Record 29 July 2021.

DOI: https://doi.org/10.1016/j.patcog.2021.108176